History of Technology



Heroes and Villains - A little light reading

Here you will find a brief history of technology. Initially inspired by the development of batteries, it covers technology in general and includes some interesting little known, or long forgotten, facts as well as a few myths about the development of technology, the science behind it, the context in which it occurred and the deeds of the many personalities, eccentrics and charlatans involved.

"Either you do the work or you get the credit" Yakov Zel'dovich - Russian Astrophysicist

Fortunately it is not always true.


You may find the Search Engine, the Technology Timeline or the Hall of Fame quicker if you are looking for something or somebody in particular.

See also the timelines of the Discovery of the Elements and Particle Physics and Quantum Theory.


The Content - It's not just about batteries. Scroll down and see what treasures you can discover.


Background

We think of a battery today as a source of portable power, but it is no exaggeration to say that the battery is one of the most important inventions in the history of mankind. Volta's pile was at first a technical curiosity but this new electrochemical phenomenon very quickly opened the door to new branches of both physics and chemistry and a myriad of discoveries, inventions and applications. The electronics, computers and communications industries, power engineering and much of the chemical industry of today were founded on discoveries made possible by the battery.


Pioneers

It is often overlooked that throughout the nineteenth century, most of the electrical experimenters, inventors and engineers who made these advances possible had to make their own batteries before they could start their investigations. They did not have the benefit of cheap, off the shelf, mass produced batteries. For many years the telegraph, and later the telephone, industries were the only consumers of batteries in modest volumes and it wasn't until the twentieth century that new applications created the demand that made the battery a commodity item.

In recent years batteries have changed out of all recognition. No longer are they simple electrochemical cells. Today the cells are components in battery systems, incorporating electronics and software, power management and control systems, monitoring and protection circuits, communications interfaces and thermal management.


History of Technology from the Bronze Age to the Present Day


Circa 3000 B.C. At the end of the fourth millennium B.C. the World was starting to emerge from the Stone Age.

Around 2900 B.C., Mesopotamians (from modern day Iraq), who had already been active for hundreds of years in primitive metallurgy extracting metals such as copper from their ores, led the way into the Bronze Age when artisans in the cities of Ur and Babylon discovered the properties of bronze and began to use it in place of copper in the production of tools, weapons and armour. Bronze is a relatively hard alloy of copper and tin, better suited for the purpose than the much softer copper, enabling improved durability of the weapons and the ability to hold a cutting edge. The use of bronze for tools and weapons gradually spread to the rest of the World until it was eventually superseded by the much harder iron.


Mesopotamia, incorporating Sumer, Babylonia and Assyria, known in the West as the Cradle of Civilisation, was located between the Tigris and Euphrates rivers (the name means "land between the rivers") in the so called Fertile Crescent stretching from the Persian Gulf up to modern day Turkey. The ancient city of Babylon, which served for nearly two millennia as a centre of Mesopotamian civilisation, is located about 60 miles (100 kilometres) south of Baghdad in modern-day Iraq. (See Map of Mesopotamia).

Unfortunately this accolade ignores the contributions of the Chinese people and the Harappans of the Indus Valley (modern day Pakistan) who were equally "civilised" during this period, practising metallurgy (copper, bronze, lead, and tin) and urban planning, with civic buildings, baked brick houses, and water supply and drainage systems.


From around 3500 B.C. the Sumerians of ancient Mesopotamia developed the World's first written language. Called Cuneiform Writing from the Latin "cuneus", meaning "wedge", it was developed as a vehicle for commercial accounting transactions and record keeping. The writing was in the form of a series of wedge-shaped signs pressed into soft clay by means of a reed stylus to create simple pictures, or pictograms, each representing an object. The clay subsequently hardened in the Sun or was baked to form permanent tablets. By 2800 B.C. the script progressively evolved to encompass more abstract concepts as well as phonetic functions (representing sounds, just like the modern Western alphabet) enabling the recording of messages and ideas. For the first time news and ideas could be carried to distant places without having to rely on a messenger's memory and integrity.

Hieroglyphic script evolved slightly later in Egypt. Though the script appeared on vases and stone carvings, many important Egyptian historical scripts and records were written in ink, made from carbon black (soot) or red ochre mixed with gelatin and gum, applied with a reed pen onto papyrus. Produced from the freshwater papyrus reed, the papyrus scrolls were fragile and susceptible to decay from both moisture and excessive dryness and many of them have thus been lost, whereas the older, more durable clay cuneiform tablets from Mesopotamia have survived.


Historians seem to agree that the wheel and axle were invented around 3500 B.C. in Mesopotamia. Pictograms on a tablet dating from about 3200 B.C. found in a temple at Erech in Mesopotamia show a chariot with solid wooden wheels. Evidence from Ur indicates that the simpler potter's wheel probably predates the use of the axled wheel for transport because of the difficulty in designing a reliable mechanism for mounting the rotating wheel on a fixed hub or a rotating axle on the fixed load carrying platform.


Sumerian mathematics and science used a base 60 sexagesimal numeral system. 60 is divisible by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30 and 60, making it more convenient than a base 10 decimal system when working with fractions. The Mesopotamians thus introduced the 60-minute hour, the 60-second minute and the 360-degree circle, with each angular degree consisting of 60 minutes, each of 60 seconds. The calendar adopted by the Sumerians, Babylonians and Assyrians was based on 12 lunar months and seven-day weeks with 24-hour days. Since the average lunar month is 29.5 days, over 12 months this would produce a total of only 354 days as against a solar year of 365.25 days. To keep the calendar aligned to the seasons they added seven extra months in each period of 19 years, equivalent to the way we add an extra day in leap years. Despite decimalisation, we still use these sexagesimal measures today.
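
As a quick check of that calendar arithmetic, the figures reconcile neatly. The sketch below is a modern illustration, not anything the Sumerians wrote down, showing why seven extra months per 19-year cycle keep the lunar calendar in step with the solar year:

```python
# Reconciling the Babylonian luni-solar calendar figures quoted above.
lunar_month = 29.5       # average lunar month in days
solar_year = 365.25      # solar year in days

plain_year = 12 * lunar_month                      # 354.0 days, as stated above
shortfall = 19 * solar_year - 19 * plain_year      # days lost over 19 years

print(plain_year)                  # 354.0
print(shortfall)                   # 213.75 days
print(shortfall / lunar_month)     # ~7.25, hence the seven intercalary months
```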


The Mesopotamians discovered glass, probably from glass beads in the slag resulting from experiments with refining metallic ores. They were also active in the development of many other technologies such as textile weaving, locks and canals, flood control, water storage and irrigation.

There are also claims that the Archimedes' Screw may have been invented in Mesopotamia and used for the water systems at the Hanging Gardens of Babylon.


2500 B.C. Sometimes known as the "Second oldest profession", soldering has been known since the Bronze Age (Circa 3000 to 1100 B.C.). A form of soldering to join sheets of gold was known to be used by the Mesopotamians in Ur. Fine metal working techniques were also developed in Egypt where filigree jewellery and cloisonné work found in Tutankhamun's tomb dating from 1327 B.C. was made from delicate wires which had been drawn through dies and then soldered in place.


Egypt was also home to Imhotep, the first man of science in recorded history. He was the world's first named architect and administrator, who around 2725 B.C. built the first pyramid ever constructed, the Stepped Pyramid of Saqqara. Papyri were unearthed in the nineteenth century dating from around 1600 B.C. and 1534 B.C., both of which refer to earlier works attributed to Imhotep. The first outlines surgical treatments for various wounds and diseases and the second contains 877 prescriptions and recipes for treating a variety of medical conditions, making Imhotep the world's first recorded physician. Other contemporary papyri described Egyptian mathematics. Egyptian teachings provided the foundation of Greek science and although Imhotep's teachings were known to the Greeks, 2200 years after his death they assigned the honour of Father of Medicine to Hippocrates.


2300 B.C. The earliest evidence of the art of stencilling comes from Egypt. Designs were cut into a sheet of papyrus and pigments were applied through the apertures with a brush. The technique was reputed to have been in use in China around the same time but no artifacts remain.


2100-1600 B.C. The Xia dynasty in China perfected the casting of bronze for the production of weapons and ritual wine and food vessels, reaching new heights during the subsequent Shang dynasty (1600-1050 B.C.).


Circa 2000 B.C. The process for making wrought iron was discovered by the Hittites, in Northern Mesopotamia and Southern Anatolia (now part of Eastern Turkey), who heated iron ore in a charcoal fire and hammered the results into wrought (worked) iron. See more about wrought iron.


1300 B.C. Fine wire was also made by the Egyptians by beating gold sheet and cutting it into strips. Recorded in the Bible, Book of Exodus, Chapter 39, Verse 3 - "And they did beat the gold into thin plates, and cut it into wires, to work it in the fine linen, with cunning work."

The Egyptians also made coarse glass fibres as early as 1600 B.C. and fibres survive as decorations on Egyptian pottery dating back to 1375 B.C.


1280 B.C. Around this date, after his escape from Egypt, Moses ordered the construction of the Ark of the Covenant to house the tablets of stone on which were written the original "Ten Commandments". Its construction is described in great detail in the book of Exodus and according to the Bible and Jewish legend it was endowed with miraculous powers including emitting sparks and fire and striking dead Aaron's sons and others who touched it. It was basically a wooden box of acacia wood lined with gold and also overlaid on the outside with gold. The lid was decorated with two "cherubim" with outstretched wings. In 1915 Nikola Tesla, in an essay entitled "The Fairy Tale of Electricity" promoting the appreciation of electrical developments, proposed what seemed a plausible explanation for some of the magical powers of the Ark. He claimed that the gold sheaths separated by the dry acacia wood effectively formed a large capacitor on which a static electrical charge could be built up by friction from the curtains around the Ark, and this accounted for the sparks and the electrocution of Aaron's sons.


Recent calculations have shown however that the capacitance of the box would be in the order of 200 picofarads and such a capacitor would need to be charged to 100,000 volts to store even 1 joule of electrical energy, not nearly enough to cause electrocution. It seems Tesla's explanation was appropriately named.
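
The arithmetic behind that claim is easy to verify with the standard formula for the energy stored in a capacitor, E = ½CV². A minimal sketch, using the estimated figures quoted above:

```python
# Energy stored in the hypothesised Ark "capacitor": E = 1/2 * C * V^2
C = 200e-12        # estimated capacitance in farads (200 picofarads)
V = 100_000        # charging voltage in volts

E = 0.5 * C * V**2
print(E)           # 1.0 joule - nowhere near enough energy to electrocute anyone
```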


800 B.C. The magnetic properties of the naturally occurring lodestone were first mentioned in Greek texts. Also called magnetite, lodestone is a magnetic oxide of iron (Fe₃O₄) which was mined in the province of Magnesia in Thessaly from where the magnet gets its name. Lodestone was also known in China at that time where it was known as "love stone" and is in fact quite common throughout the world.

Surprisingly although they were aware of its magnetic properties, neither the Greeks nor the Romans seem to have discovered its directive property.


Eight hundred years later in 77 A.D., the somewhat unscientific Roman chronicler of science Pliny the Elder, completed his celebrated series of books entitled "Natural History". In it, he attributed the name "magnet" to the supposed discoverer of lodestone, the shepherd Magnes, "the nails of whose shoes and the tip of whose staff stuck fast in a magnetic field while he pastured his flocks". Thus another myth was born. Pliny was killed during the volcanic eruption of Mount Vesuvius near Pompeii in A.D. 79 but his "Natural History" lived on as an authority on scientific matters up to the Middle Ages.


600 B.C. The Greek philosopher and scientist, Thales of Miletus (624-546 B.C.) - one of the Seven Wise Men of Greece (Miletus is now in Turkey) - was the first thinker to attempt to explain natural phenomena by means of some underlying scientific principle rather than by attributing them to the whim of the Gods - a major departure from previous wisdom and the foundation of scientific method, frowned upon by Aristotle but rediscovered during the Renaissance and the Scientific Revolution.

He travelled to Egypt and the city state of Babylon in Mesopotamia (now modern day Iraq) and is said to have brought Babylonian mathematics back to Greece. The following rules are attributed to him:

  • Any angle inscribed in a semicircle is a right angle. Known as the Theorem of Thales, it was however known to the Babylonians 1000 years earlier.
  • A circle is bisected by any diameter.
  • The base angles of an isosceles triangle are equal.
  • The opposite angles formed by two intersecting lines are equal.
  • Two triangles are congruent (equal shape and size) if two angles and a side are equal.
  • The sides of similar triangles are proportional.

Using the concept of similar triangles he was able to calculate the height of pyramids by comparing the size of their shadows with smaller, similar triangles of known dimensions. Similarly he calculated the distance to ships at sea by noting the azimuth angle of the ship from a baseline of two widely spaced observation points a known distance apart on the shore and scaling up the distance to the ship from the dimensions of a smaller similar triangle. In this way he was able to calculate the distance to far off objects without measuring the distance directly, the basis of modern surveying.
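
A minimal sketch of the similar-triangles calculation (the numbers are illustrative assumptions, not historical measurements):

```python
# Thales' method: the ratio of height to shadow length is the same for a
# reference stick and for the pyramid, since the Sun's rays strike both at
# the same angle. All figures below are invented for illustration.
stick_height = 2.0        # metres
stick_shadow = 3.0        # metres
pyramid_shadow = 220.5    # metres, measured from the centre of the base

pyramid_height = stick_height * (pyramid_shadow / stick_shadow)
print(pyramid_height)     # 147.0 metres
```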


Thales also demonstrated the effect of static electricity by picking up small items with an amber rod made of fossilised resin which had been rubbed with a cloth. He also noted that iron was attracted to lodestone.


Thales left no writings and knowledge of him is derived from an account in Aristotle's Metaphysics written nearly 300 years later and itself subject to numerous subsequent copies and translations.


530 B.C. Pythagoras of Samos (580-500 B.C.), an Ionian Greek, is considered by many to be the Father of Mathematics. Like Thales, he had travelled to Egypt and Babylon where he studied astronomy and geometry. His theorem: "In a right-angled triangle the square on the hypotenuse is equal to the sum of the squares on the other two sides" is well known to every schoolchild.

Around 530 B.C., he moved to Croton, in Magna Graecia, where he set up a religious sect. His cult-like followers were enthralled by numbers such as prime numbers and irrational numbers and considered their work to be secret and mystical. Prior to Pythagoras, mathematicians had dealt only in whole numbers and fractions or ratios, but Pythagoras brought them into contact with √2 and other square roots which were not rational numbers.

The Pythagoreans also discovered the Divine Proportion, also called the Golden Mean or Golden Ratio, an irrational number Φ (Phi) = (√5+1)/2 ≈ 1.618 which has fascinated both scientists and artists ever since.

(See examples of The Divine Proportion).
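
A small sketch confirming the defining property of Φ, namely that Φ² = Φ + 1 (equivalently Φ = 1 + 1/Φ), which makes it the positive root of x² - x - 1 = 0:

```python
import math

phi = (math.sqrt(5) + 1) / 2
print(phi)                # 1.618033988749895
print(phi**2, phi + 1)    # both 2.618033988749895 - the defining property
```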

None of Pythagoras' writings have survived and knowledge of his life and works is based on tradition rather than verified facts.


Circa 500 B.C. Cast iron was produced for the first time by the Chinese during the Zhou dynasty (1046-256 B.C.). Prior to that, it had not been possible to raise the temperature of the ore sufficiently to melt the iron, and the only available iron was wrought iron, created by heating iron ore in a furnace with carbon as the reducing agent and hammering the resulting spongy iron output. Furnaces of the day could reach temperatures of about 1300°C, which was enough to melt copper, whose melting point is 1083°C, but not enough to melt iron, whose melting point is 1528°C. By a combination of the addition of phosphorus to the ore, which reduced its melting point, the use of a bellows to pump air through the ore to aid the exothermic reduction process, and the use of improved high temperature refractory bricks forming the walls of the furnace to withstand the heat, the Chinese were able to melt the iron and cast it into functional shapes ranging from tools and pots and pans to heavy load bearing constructional members, as well as fine ornamental pieces.

Cast iron was not produced in Europe until around 1400 A.D. Gun-barrels and bullets were the first cast iron products to be manufactured, but it was not until 1709, when Abraham Darby introduced new production methods, that low cost, volume production was achieved.


See more about Chinese Inventions.


460 B.C. Another Greek philosopher, Democritus of Abdera, developed the idea that matter could be broken down into very small indivisible particles which he called atoms. Subsequently Aristotle dismissed Democritus' atomic theory as worthless and Aristotle's views tended to prevail. It was not until 1803 A.D. that Democritus' theory was resurrected by John Dalton.


380 B.C. Greek philosopher Plato (Circa 428-347 B.C.) composed the Allegory of the Cave as part of his major work, the Republic.

He believed that there were patterns or mathematical relationships, which we would now call "science", behind natural phenomena which were often hidden from the observer and difficult to observe directly.

In his allegory he described a community of prisoners permanently chained from birth to the floor of a cave facing a blank wall with no possibility to look elsewhere. See diagram of Plato's Cave. Behind the prisoners was a low wall concealing from them an elevated walkway or stage. People could walk around this stage, out of sight of the prisoners, carrying 3D objects or puppets above their heads. A fire behind the stage next to the back wall of the cave illuminated these moving objects which cast shadows on the blank wall in front of the prisoners. Any sounds of the people talking, or other movements, echoed off the walls so that the prisoners believed these sounds came from the shadows.

For the prisoners, these shadows were the reality. This was their World. They had no way of knowing that a different true reality existed. If the reality were explained to them they would probably not believe it.


The cave allegory illustrated fundamental issues in science such as:

  • The observer's perception of reality suffers from incomplete information and the difficulty of interpreting the information which is available.
  • It is dangerous to infer anything about reality based on our experiences.

Plato's observations still hold good today, 2400 years later, particularly with particle physics where all is not what it seems.


350 B.C. The Greek philosopher and scientist Aristotle (384-322 B.C.), student of Plato, provided "scientific" theories based on pure "reason" for everything from the geocentric structure of the cosmos down to the four fundamental elements earth, fire, air and water.


Aristotle believed that knowledge should be gained by pure rational thought and had no time for mathematics which he regarded only as a calculating device. Neither did he support the experimental method of scientific discovery, espoused by Thales, which he considered inferior. In his support it should be mentioned that the range of experiments he could possibly undertake was limited by the lack of suitable accurate measuring instruments in his time and it was only in the seventeenth century during the Scientific Revolution that such instruments started to become available.

Unfortunately Aristotle's "rational" explanations were subsequently taken up by St Thomas Aquinas (1225-1274 A.D.) and espoused by the church, which for many years made it difficult, if not dangerous, to propose alternative theories. Aristotle's theories of the cosmos and chemistry thus held sway for 2000 years, hampering scientific progress until they were finally debunked by Galileo, Newton and Lavoisier, who showed that natural phenomena could be described by mathematical laws.

See also Gilbert (1600), Mersenne (1636), Descartes (1644) and Von Guericke (1660) and the Scientific Revolution.


Aristotle was also a tutor to the young Alexander the Great.


Like many sources from antiquity, Aristotle's original manuscripts have been destroyed or lost and we only know of Aristotle's works via a series of copies and translations, from the Greek into Arabic, then from Arabic into Latin and finally from Latin into English and other modern languages. There is much that could have been lost, changed or even added in the translations.


332 B.C. Alexander the Great conquered Egypt and ordered the building of a new city on the Egyptian Nile delta named after himself - Alexandria. When he died in 323 B.C. his kingdom was divided between three of his generals, with Egypt going to Ptolemy (367-283 B.C.), who later declared himself King Ptolemy I Soter (not to be confused with the astronomer Claudius Ptolemy (90-168 A.D.)) and founded a new dynasty, replacing the Pharaohs, which lasted until the Roman conquest of 30 B.C.

Ptolemy Soter's grandest building project in the new capital was the Musaeum or "Temple of the Muses" (from which we get the modern word "museum") which he founded around 306 B.C.. A most important part of the Musaeum was the famous Library of Alexandria, which he conceived, and which was carried through by his son Ptolemy II Philadelphus, with the object of collecting all the world's knowledge. Most of the staff were occupied with the task of translating works onto papyrus and it is estimated (probably over-estimated) that as many as 700,000 scrolls, the equivalent of more than 100,000 modern printed books, filled the library shelves.

Great thinkers were invited to Alexandria to establish an academy at the library turning it into a major centre of scholarship and research. Euclid was one of the first to teach there. Ultimately the library overshadowed the Musaeum in importance and interest becoming perhaps the oldest university in the world.

It was at the library that:

  • Euclid developed the rules of geometry based on rigorous proofs. His mathematical text was still in use 2000 years later.
  • Archimedes invented a water pump based on a helical screw, versions of which are still in use today. (The actual date of this invention is however disputed).
  • Eratosthenes measured the diameter of the Earth.
  • Hero invented the aeolipile, the first reaction turbine.
  • Claudius Ptolemy wrote the Almagest, the most influential scientific book about the nature of the Universe for 1,400 years.
  • Hypatia, the first woman scientist and mathematician, invented the hydrometer, before she met her untimely end during Christian riots.

Alas the ancient library is no more. Four times it was devastated by fire, accidental or deliberate, during wars and riots and historians disagree about who were the major culprits, their motives and the extent of the damage in each case.

  • 48 B.C. Damage caused during the Roman conquest of Egypt by Julius Caesar
  • 272 A.D. An attack on Queen Zenobia of Palmyra by Roman Emperor Aurelian
  • 391 A.D. An edict of the Emperor Theodosius I made paganism illegal and Patriarch Theophilus of Alexandria ordered demolition of heathen temples. This was followed by Christian riots the same year and also in 415 A.D..
  • 639 A.D. The Muslim conquest of Alexandria by General Amr ibn al 'Aas leading the army of Caliph Omar.

But even without the wars, the delicate papyrus scrolls were apt to disintegrate with age and what was left of the library eventually succumbed to the ravages of major earthquakes in Crete in 365 A.D. and 1303 A.D. which caused tsunamis which in turn devastated Alexandria.


300 B.C. Greek mathematician Euclid of Alexandria (Circa 325-265 B.C.), a great organiser and logician, taught at the great Library of Alexandria and took the current mathematical knowledge of his day and organised it into a manuscript consisting of thirteen books now known as Euclid's Elements. Considered by many to be the greatest mathematics text book ever written, it has been used for over 2000 years. Nine of these books deal with plane and solid geometry, three cover number theory, and one (book 10) concerns incommensurable lengths which we would now call irrational numbers.


Proof, Logic and Deductive Reasoning

The "Elements" were not just about geometry, Euclid's theorems and conclusions were backed up by rigorous proofs based on logic and deductive reasoning and he was one of the first to require that mathematical theories should be justified by such proofs.

An example of the type of deductive reasoning applied by Euclid is the logical step based on the logical principle that if premise A implies B, and A is true, then B is also true, a principle that mediaeval logicians called modus ponens (the way that affirms by affirming). A classical example of this is the conclusion drawn from the following two premises: A: "All men are mortal" and B: "Socrates is a man" then the conclusion C: "Socrates is mortal" is also true.

In this manner Euclid started with a small set of self evident axioms and postulates and used them to produce deductive proofs of many other new propositions and geometric theorems. He wrote about plane, solid and spherical geometry, perspective, conic sections, and number theory applying rigorous formal proofs and showed how these propositions fitted into a logical system. His axioms and proofs have been a useful set of tools for many subsequent generations of mathematicians, demonstrating how powerful and beneficial deductive reasoning can be.


An example of Euclid's logical deduction is the method of exhaustion, which was used as a method of finding the area of an irregular shape by inscribing inside it a sequence of regular polygons of known area, with an increasing number of sides n, whose areas converge to the area of the containing shape. As n becomes very large, the difference in area between the given shape and the inscribed polygon becomes very small. As this difference becomes ever smaller, the possible values for the area of the shape are systematically "exhausted" as the polygons approach the given shape. This sets a lower limit to the possible area of the shape.


The method of exhaustion used to find the area of the shape above is a special case of proof by contradiction, known as reductio ad absurdum, which seeks to demonstrate that a statement is true by showing that a false, untenable, or absurd result follows from its denial, or in turn to demonstrate that a statement is false by showing that a false, untenable, or absurd result follows from its acceptance.

In the case above this means finding the area of the shape by first comparing it to the area of a second region inside the shape (which can be "exhausted" so that its area becomes arbitrarily close to the true area). The proof involves assuming that the true area is less than the second area, and then proving that assertion false. This gives a lower limit for the area of the shape under consideration.

Then comparing the shape to the area of a third region outside of the shape and assuming that the true area is more than the third area, and proving that assertion is also false. This gives an upper limit for the area of the shape.


No original records of Euclid's work survive and the oldest surviving version of "The Elements" is a Byzantine manuscript written in 888 A.D. Little is known of his life and the few historical references to Euclid which exist were written centuries after his death, by Greek mathematician Pappus of Alexandria around 320 A.D. and philosopher and historian Proclus around 450 A.D.

According to Proclus, when the ruler Ptolemy I Soter asked Euclid if there was a shorter road to learning geometry than through the Elements, Euclid responded "There is no royal road to geometry".


269 B.C. The greatest mathematician and engineer in antiquity, the Greek Archimedes of Syracuse (287-212 B.C.) began his formal studies at the age of eighteen when he was sent by his father, Phidias, a wealthy astronomer and kinsman of King Hieron II of Syracuse, to Egypt to study at the school founded by Euclid in the great Library of Alexandria. It kept him out of harm's way in the period leading up to the first Punic war (264-241 B.C.) between Carthage and Rome when Sicily was still a colony of Magna Graecia, vulnerably situated in strategic territory between the two adversaries. Syracuse initially supported Carthage, but early in the war Rome forced a treaty of alliance from king Hieron that called for Syracuse to pay tribute to the Romans. Returning to Syracuse in 263 B.C. Archimedes became a tutor to Gelon, the son of King Hieron.


Archimedes' Inventions

Archimedes was known as an inventor, but unlike the empirical designs of his predecessors, his inventions were the first to be based on sound engineering principles.

He was the world's first engineer, the first to be able to design levers, pulleys and gears with a given mechanical advantage, thus founding the study of mechanics and the theory of machines.

Archimedes also founded the studies of statics and hydrostatics and was the first to elucidate the principle of buoyancy and to use it in practical applications.

  • Though he did not invent the lever, he explained its mechanical advantage, or leverage, in his work "On the Equilibrium of Planes" and is noted for his claim "Give me a place to stand and a long enough lever and I can move the Earth".
  • Archimedes' explanation of the theory of the lever is based on the principle of balancing the input and output torques about the fulcrum of the device so that the input force multiplied by its distance from the fulcrum is equal to the weight (or downward force) of the load multiplied by its distance from the fulcrum. In this arrangement, the distance moved by each force is proportional to its distance from the fulcrum. Thus a small force moving a long distance can lift a heavy load over a small distance, and the mechanical advantage is equal to the ratio of the distances from the fulcrum of the points of application of the input force and the output force. He applied similar reasoning to explain the operation of compound pulleys and gear trains, in the latter case using angular displacement in place of linear displacement. (A numerical sketch of these rules follows this list.)

    We would now relate this theory to the concepts of work done, potential energy and the conservation of energy. See also the hydraulic mechanical advantage described by Pascal.

  • He is credited by the Greek historian Plutarch (46-120 A.D.), with inventing the block and tackle / compound pulley to move ships and other heavy loads. The use of a simple, single-sheaved pulley to change the direction of the pull, for drawing water and lifting loads, had been known for many years. This device did not provide any mechanical advantage, but Archimedes showed that a multi-sheaved, compound pulley could provide a mechanical advantage of n where n is the number of parts of the rope in the pulley mechanism which support the moving block. For example, a block and tackle system with three sheaves or pulley wheels in the upper block and two sheaves in the lower (suspended) block will have five sections of the rope supporting the load giving a mechanical advantage of five. Pulling the rope by five feet with a force of one pound will draw the pulley blocks one foot closer together, raising the load by one foot. The tension on the rope will be the same throughout its length, so that the five sections of the rope between the pulleys together provide a combined lifting force of five pounds on the lower block. Thus the effect on the load is that the mechanism multiplies the force applied by five but divides the distance moved by five.
  • Similarly, Archimedes was familiar with gearing, which had been mentioned in the writings of Aristotle about wheel drives and windlasses around 330 B.C., and was able to calculate the mechanical advantage provided by the geared mechanisms of simple spur gears. Archimedes is however credited with the invention of the worm gear which not only provided much higher mechanical advantage, it also had the added advantage that the "worm", actually a helical screw, could easily rotate the gear wheel but the gear wheel could not easily, if at all, rotate the worm. This gave the mechanism a ratchet like, or braking, property such that heavy loads would not slip back if the input force was relaxed.
  • It is said that he invented a screw pump, known after him as the Archimedes' Screw, for raising water by means of a hollow wooden pipe containing a close fitting wooden, helical screw on a long shaft turned by a handle at one end. When the other end was placed in the water to be raised and the handle turned, water was carried up the tube by the screw and out at the top. However such devices probably predated Archimedes and were possibly used in the Hanging Gardens of Babylon. The Archimedes' Screw is still used today as a method of irrigation in some developing countries.
  • He also designed winches, windlasses and military machines including catapults, trebuchets and siege engines.
  • It is claimed by some that Archimedes invented the odometer but this is more likely to be the work of Vitruvius who described its working details.
  • Fanciful claims have also been made that he designed gear mechanisms for moving extremely heavy loads, an Iron Claw to lift ships out of the water causing them to break up and a Death Ray to set approaching ships on fire. See more about these claims below.
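
As noted in the lever and pulley items above, both rules reduce to simple proportions. A numerical sketch with illustrative figures (not taken from any ancient source):

```python
# Lever: balancing torques about the fulcrum means
#   input_force * input_arm = load * load_arm
load = 500.0          # newtons
load_arm = 0.5        # metres from the fulcrum to the load
input_arm = 5.0       # metres from the fulcrum to the applied force

input_force = load * load_arm / input_arm
print(input_force)    # 50.0 N - a mechanical advantage of 10

# Compound pulley: the mechanical advantage equals the number of rope
# sections supporting the moving block; force divides, pull distance multiplies.
rope_sections = 5
pull_force = load / rope_sections     # 100.0 N to lift the 500 N load
rope_pulled = 5 * 1.0                 # pull 5 m of rope to raise the load 1 m
print(pull_force, rope_pulled)
```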

Archimedes' Mathematics

While Archimedes was famous for his inventions, his mathematical writings were equally important but less well known in antiquity. Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation of his work was not made until circa 530 A.D. by Isidore of Miletus.

  • Archimedes was able to use infinitesimals in a way that is similar to modern integral calculus. Through proof by contradiction (reductio ad absurdum), he could give answers to problems to an arbitrary degree of accuracy, while specifying the limits within which the answer lay.
  • Though mathematicians had been aware for many years that the ratio π between the circumference and the diameter of a circle was a constant, there were wide variations in the estimations of its magnitude. Archimedes calculated its value to be 3.1418, the first reasonably accurate value of this constant.
  • He did it by using the method of exhaustion to calculate the circumference of a circle rather than the area and by dividing the circumference by the diameter he obtained the value of π. First he drew a regular hexagon inside a circle and computed the length of its perimeter. Then he improved the accuracy by progressively increasing the number of sides of the polygon and calculating the perimeter of the new polygon with each step. As the number of sides increases, it becomes a more accurate approximation of a circle. At the same time, by circumscribing the circle with a series of polygons outside of the circle, he was able to determine an upper limit for the perimeter of the circle. He found that with a 96-sided polygon the lower and upper limits of π calculated by his method were given by:

    223/71 < π < 22/7

    In modern decimal notation this converts to:

    3.1408 < π < 3.1428

    The value of π calculated by Archimedes is given by the average of the two limits, 3.1418, which is within 0.0002 of its true value of 3.1416. (A short computational sketch of this polygon doubling appears after this list.)

  • More generally, Archimedes calculated the area under a curve by imagining it as a series of very thin rectangles and proving that the sum of the areas of all the rectangles gave a very close approximation to the area under the curve. Using the method of exhaustion he showed that the approximation was neither greater nor smaller than the area of the figure under consideration and therefore it must be equal to the true area. He was thus able to calculate the areas and volumes of different shapes and solids with curved sides. This method anticipated the methods of integral calculus introduced nearly 2000 years later by Gregory, Newton and Leibniz.
  • He was also able to calculate the sum of a geometric progression.
  • He proved that the area of a circle was equal to π multiplied by the square of the radius of the circle (πr²) and that the volume and surface area of a sphere are 2/3 of those of a cylinder with the same height and diameter.
  • Thus he showed that the surface area A of a sphere with radius r is given by: A = 4πr² and the volume V of a sphere with radius r is given by: V = (4/3)πr³, which he regarded as one of his proudest achievements.

  • He also developed fundamental theorems concerning the determination of the centre of gravity of plane figures.
  • In an attempt to calculate how many grains of sand it would take to fill the Universe, Archimedes devised a number system which he called the Sand Reckoner to represent the very large numbers involved. Based on the largest number then in use, called the myriad, equal to 10,000, he used the concept of a myriad-myriads equal to 10⁸. He called the numbers up to 10⁸ "first numbers" and called 10⁸ itself the "unit of the second numbers". Multiples of this unit then became the second numbers, up to this unit taken a myriad-myriad times, 10⁸·10⁸ = 10¹⁶. This became the "unit of the third numbers", whose multiples were the third numbers, and so on, so that the largest number became (10⁸) raised to the power (10⁸) which in turn is raised to the power (10⁸).
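
The polygon-doubling procedure described above is easily reproduced. The sketch below uses modern trigonometric functions (which Archimedes did not have; he worked the perimeters out geometrically) to generate the same narrowing bounds:

```python
import math

# Perimeters of regular n-gons inscribed in and circumscribed about a circle
# of diameter 1 give lower and upper bounds on pi.
n = 6
while n <= 96:
    lower = n * math.sin(math.pi / n)   # inscribed polygon perimeter
    upper = n * math.tan(math.pi / n)   # circumscribed polygon perimeter
    print(n, round(lower, 4), round(upper, 4))
    n *= 2

# At n = 96 the bounds are roughly 3.1410 and 3.1427, consistent with
# Archimedes' 223/71 < pi < 22/7.
```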

Myths and Reality

As with many great men of antiquity, few, if any, contemporary records of Archimedes' works remain and his reputation has been embellished by historians writing about him many years after his death, or trashed by artists, ignorant of the scientific principles involved, attempting to illustrate his ideas. This is probably the case with four of the oft quoted anecdotes about his work.

  • It is claimed that Archimedes used a mirror or mirrors on the shore to focus the Sun's rays, the so called Death Rays onto attacking ships to destroy them by setting them on fire. (The Greeks had much more practical incendiary missiles available to them at the time and catapults to throw them long distances)

  • Similarly it is reported that Archimedes used his compound pulley system connected to an Iron Claw suspended from a beam to lift the prows of attacking ships out of the water causing them to break up or capsize and sink. (The ships would have to be almost on the beach, directly in front of the defensive claw, to be in range of these machines.)

  • He was also familiar with geared mechanisms and it was claimed by third century historian, Athenaeus, that Archimedes' systems of winches and pulleys would enable a few men to launch a huge boat into the sea or to carry it on land. These mechanisms were illustrated by Gian Maria Mazzucchelli in his 1737 biography of Archimedes. It is quite clear from the drawings that the wooden gear wheels would have been unable to transmit the power required and the tensile strength of the ropes employed is also questionable.

  • Over the years, in the absence of written records, other artists and illustrators have tried to depict Archimedes' devices and mechanisms. Examples of how the artists have imagined these devices are shown in the page about Archimedes' Machines.


  • The most widely known anecdote about Archimedes is the Eureka story told two centuries later by the Roman architect and engineer Vitruvius. According to Vitruvius, King Hieron II had supplied a pure gold ingot to a goldsmith charged with making a new crown. The new crown when delivered weighed the same as the ingot supplied but the King wanted Archimedes to determine whether the goldsmith had adulterated the gold by substituting a portion of silver. Archimedes was aware that silver is less dense than gold, so he would be able to determine whether some of the gold had been replaced by silver by checking the density. He had a balance to check the weight, but how could he determine the volume of an intricately designed crown without melting it down or otherwise damaging it?
  • While taking a bath, he noticed that the level of the water in the tub rose as he got in, and realised that this effect could be used to determine the volume of the crown. By immersing the crown in water, the volume of water displaced would equal the volume of the crown. If any of the gold had been replaced by silver or any other less dense metal, then the crown would displace more water than a similar weight of pure gold. EUREKA!!! It was reported that Archimedes then took to the streets naked, so excited by his discovery that he had forgotten to dress, crying "Eureka!" (Greek: meaning "I have found it!").

    The test was conducted successfully, proving that silver had indeed been mixed in. There is no record of what happened to the goldsmith. It is claimed today that the change in volume would probably have been so small as to be undetectable by the apparatus available to Archimedes at the time (a numerical sketch of the volumes involved follows below).

    There is no question however that he devised a method of measuring the volume of irregularly shaped objects and also understood the principle of buoyancy and its use for comparing the density of the materials used in different objects, but the story of him running naked through the streets is probably apocryphal.
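
The density reasoning in the Eureka story, and the claim that the volume difference would have been small, can both be illustrated with a quick calculation (modern density values; the crown mass and the degree of adulteration are assumptions for illustration):

```python
# Densities in grams per cubic centimetre (modern values).
rho_gold = 19.3
rho_silver = 10.5

mass = 1000.0    # grams - an assumed crown mass

pure_volume = mass / rho_gold    # ~51.8 cm^3 for pure gold
# The same mass with 30% replaced by silver occupies more volume:
alloy_volume = 0.7 * mass / rho_gold + 0.3 * mass / rho_silver
print(round(pure_volume, 1), round(alloy_volume, 1))   # 51.8 vs 64.8 cm^3

# The adulterated crown displaces ~13 cm^3 more water - a real difference,
# but a small one to read off the water level of a large vessel.
```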


All of these stories probably contain a major element of truth and it would not be surprising that Archimedes was well aware of, and had publicised, the theoretical possibilities involved in these schemes, but whether they could have actually been successfully implemented with the available technology and materials of the day is open to question. The principles were correct but the scale and effectiveness of the devices described in biographies written hundreds of years later was doubtful. There is unfortunately no corroborating evidence to back up these later descriptions of the military exploits. If the naval siege defences had been so successful, why would they not have been subsequently adopted as standard practice and why did they not appear in historical accounts of the battles?


Death of Archimedes

By 215 B.C. hostilities between Carthage and Rome had flared up once more in the second Punic War, and in 214 B.C. Syracuse sided once more with the Carthaginians and so came under siege by the Romans under General Marcus Claudius Marcellus. Archimedes' skills in designing military machines and mechanical devices were well known, even to the Romans, and were called upon in the defence of Syracuse during these hostilities.

Greek historian Plutarch (Circa 46-120 A.D.) gave two accounts of Archimedes' death in 212 B.C. when Roman forces eventually captured the city after a two year siege. The first describes how Archimedes was contemplating a mathematical problem on a diagram he had drawn in the dust on the ground when he was approached by a Roman soldier who commanded him to come and meet General Marcellus who considered the great inventor to be a valuable scientific asset who should not be harmed. But Archimedes declined, saying that he had to finish working on the problem. The soldier was enraged by this, and ran him through with his sword, much to the annoyance of Marcellus.

The second account explains that Archimedes was killed by a soldier while attempting to rob him of his valuable mathematical instruments.

Recent examination of all the accounts by both Carthaginian and Roman historians of the details of Archimedes' death has however reached a different conclusion. As we know, history is often written by the winners. The counter view is that Archimedes' death was the state-sponsored assassination of an enemy of Rome, a key player whose inventions were vital to the defence of Syracuse. The nations were at war. Why would Archimedes be so oblivious to the danger he was in? Marcellus' feigned sorrow and anger after the event were a cover for his guilt at ordering the death of the World's greatest scientist at the time.


250 B.C. The Baghdad Battery - In 1936 several unusual earthenware jars, dating from about 250 B.C., were unearthed during archeological excavations at Khujut Rabu near Baghdad. A typical jar was 130 mm (5-1/2 inches) high and contained a copper cylinder, the bottom of which was capped by a copper disk and sealed with bitumen or asphalt. An iron rod was suspended from an asphalt stopper at the top of the copper cylinder into the centre of the cylinder. The rod showed evidence of having been corroded with an acidic agent such as wine or vinegar. 250 B.C. corresponds to the Parthian occupation of Mesopotamia (modern day Iraq) and the jars were held in Iraq's State Museum in Baghdad. (Baghdad itself was not founded until 762 A.D.). In 1938 they were examined by German archeologist Wilhelm König who concluded that they were Galvanic cells or batteries supposedly used for gilding silver by electroplating. A mysterious anachronism. Backing up his claim, König also found copper vases plated with silver dating from earlier periods in the Baghdad Museum and other evidence of (electro?)plated articles from Egypt. Since then, several replica batteries have been made using various electrolytes including copper sulphate and grape juice, generating voltages from half a Volt to over one Volt, and they have successfully been used to demonstrate the electroplating of silver with gold. One further, more recent, suggestion by Paul T. Keyser, a specialist in Near Eastern Studies from the University of Alberta, is that the galvanic cells were used for analgesia. There is evidence that electric eels had been used to numb an area of pain, but quite how that worked with such a low voltage battery is not explained. Apart from that, no other compelling explanation of the purpose of these artifacts has been proposed and the enigma still remains.


Despite warnings about the safety of these priceless articles before the 2003 invasion of Iraq by the US, the UK and their allies, they were plundered from the museum during the war and their whereabouts are now unknown.


A nice and oft repeated story but there is a counter view about their purpose.

The Parthians were a nomadic tribe of skilled warriors, not noted for their scientific achievements. The importance of such an unusual electrical phenomenon seems to have gone completely unrecorded within the Parthian and contemporary cultures and then to have been completely forgotten, despite extensive historical records from the period.

There are also some features about the artifacts themselves which do not support the battery theory. The asphalt completely covers the copper cylinder, electrically insulating it so that no current could be drawn without modifying the design and no wires, conductors, or any other sort of electrical equipment associated with the artifacts have been found. Furthermore the asphalt seal forms a perfect seal for preventing leakage of the electrolyte but it would be extremely inconvenient for a primary galvanic cell which would require frequent replacement of the electrolyte. As an alternative explanation for these objects, it has been noted that they resemble storage vessels for sacred scrolls. It would not be at all surprising if any papyrus or parchment inside had completely rotted away, perhaps leaving a trace of slightly acidic organic residue.


240 B.C. Greek mathematician Eratosthenes (276-194 B.C.) of Cyrene (now called Shahhat, Libya), the third chief librarian at the Library of Alexandria and a contemporary of Archimedes, calculated the circumference of the Earth. Considering the tools and knowledge available at the time, Eratosthenes' results are truly brilliant. Equipped with only a stick, he did not even need to leave Alexandria to make this remarkable breakthrough. Not only did he know that the Earth was spherical, 1700 years before Columbus was born, he also knew how big it was to an accuracy within 1.5%. See Eratosthenes Method and Calculation.
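
The calculation behind this result reduces to a single proportion. A sketch using the traditionally quoted figures (the modern equivalent of the stadion is uncertain, so the conversion at the end is an assumption):

```python
# Eratosthenes' proportion: at the summer solstice the Sun was overhead at
# Syene but cast a 7.2 degree shadow at Alexandria. 7.2 degrees is 1/50 of a
# full circle, so the Earth's circumference is 50x the Alexandria-Syene distance.
shadow_angle = 7.2          # degrees
distance_to_syene = 5000    # stadia (the traditionally quoted figure)

circumference = distance_to_syene * 360 / shadow_angle
print(circumference)        # 250,000 stadia - about 40,000 km if one stadion
                            # is taken as roughly 160 metres
```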


He invented the discipline of geography, including the terminology still used today, and created the first map of the world incorporating parallels and meridians (latitudes and longitudes) based on the available geographical knowledge of the era. He was also the first to calculate the tilt of the Earth's axis (again with remarkable accuracy), he deduced that the calendar year was 365 1/4 days long, and he was the first to suggest that every four years there should be a leap year of 366 days.


Eratosthenes also devised a way of finding prime numbers known as the sieve. Instead of using trial division to sequentially test each candidate number for divisibility by each prime which is a very slow process, his system marks as composite (i.e. not prime) the multiples of each prime, starting with the multiples of 2, then 3 and continues this iteratively so that they can be separated out. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them which is equal to that prime.
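
The sieve translates almost word for word into code. A minimal sketch:

```python
def sieve(limit):
    """Return the primes up to limit using Eratosthenes' sieve."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            # Mark the multiples of p as composite, stepping by a constant
            # difference of p each time - exactly as the text describes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```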


220-206 B.C. The magnetic compass was invented by the Chinese during the Qin (Chin) Dynasty, named after China's first emperor Qin Shi Huang di, the man who built the wall. It was used by imperial magicians mostly for geomancy (Feng Shui and fortune telling), but the "Mighty Qin's" military commanders are supposed to have been the first to use a lodestone as a compass for navigation. Chinese compasses point south.


See more about Chinese Inventions.


206 B.C. - 220 A.D. During the Han Dynasty, Chinese historian Ban Gu recorded in his Book of Han the existence of pools of "combustible water", most likely petroleum, in what is now China's Shaanxi province. During the same period, in Szechuan province, natural gas was also recovered from what they called "fire wells" by deep drilling up to several hundred feet using percussion drills with cast iron bits. These fuels were used for domestic heating and for extracting metals from their ores (pyrometallurgy), for breaking up rocks as well as for military incendiary weapons. The heavy oil was also distilled to produce paraffin (kerosene) for use in decorative oil lamps from the period which have been discovered.

Percussion drilling involves punching a hole into the ground by repeatedly raising and dropping a heavy chisel shaped tool bit into the bore hole to shatter the rock into small pieces which can be removed. The drill bit is raised by a cable and pulley system suspended from the top of a wooden tower called a derrick.

The fuels were later named in Chinese shíyóu ("rock oil") by Shen Kuo, just as the word petroleum is derived from the Latin petra (rock) and oleum (oil).


It was over 2000 years before the first oil well was drilled by Edwin Drake in the USA and he used the same percussion drilling method as the Chinese.


See more about Chinese Inventions.


140 - 87 B.C. Paper was first produced in China in the second century B.C. Made by pounding and disintegrating hemp fibres, rags and other plant fibres in water followed by drying on a flat mould, the paper was thick and coarse and, surprisingly, it was not used for writing but for clothing, wrapping, padding and personal hygiene. The oldest surviving piece of paper was found in a tomb near Xian, dates from between 140 B.C. and 87 B.C., and is inscribed with a map.

The first paper found with writing on it was discovered in the ruins of an ancient watch tower and dates from 105 A.D. The development of this finer paper suitable for writing is attributed to Cai Lun, a eunuch in the Imperial court during the Han dynasty (202 B.C. - A.D. 220).

Paper was an inexpensive new medium which provided a simple means of communicating accurately with others who were not present without the danger of "Chinese whispers" corrupting the message, but more importantly, it enabled knowledge to be spread to a wider population or recorded for use by future generations. A simple invention which, like the printing press, brought enormous benefits to society.


See more about Chinese Inventions.


27 B.C. - 5th Century A.D. The Roman Empire. The Romans were great plumbers but poor electricians.

The Romans were deservedly renowned for their civil engineering - buildings, roads, bridges, aqueducts, central heating and baths. Surprisingly however, in 500 years, they didn't advance significantly on the legacies of mathematics and scientific theories left to them by the Greeks. Fortunately, the works of the Greek philosophers and mathematicians were preserved by Arab scholars who translated them into Arabic.


Circa 15 B.C. Some time between 27 B.C. and 15 B.C. Roman architect and military engineer Marcus Vitruvius Pollio completed "De Architectura", or "On Architecture: The Ten Books on Architecture". It is a comprehensive manual for architects covering the principles of architecture, education and training, town planning, environment, structures, building materials and construction methods, design requirements for buildings intended for different purposes, proportions, decorative styles, plans for houses, heating, acoustics, pigments, hydraulics, astronomy and a range of machinery and instruments.


His philosophies about architecture are summed up in the Vitruvian Virtues that a structure must exhibit the three qualities of firmitas, utilitas, venustas - meaning that it must be solid, useful and beautiful.


Included in Book 10 of the study are designs for military and hydraulic machines, including pulleys and hoists and designs for trebuchets, water wheels and armoured vehicles which have had an undeniable influence on the inventions of Leonardo da Vinci. See more about Vitruvius water wheels.

Amongst Vitruvius' designs are instructions for the design of an odometer which he called a "hodometer". It consisted of a cart with a separate, large wheel of known circumference mounted in a frame. The large wheel was connected through the intermediate gear wheel of a reduction gear mechanism to a horizontal disk with a series of holes around its rim each containing a small pebble. A single hole in the housing of the horizontal disk allowed a pebble to fall through into a container below when it arrived above the hole. As the cart was pushed along the ground, one pebble would fall into the container for each revolution of the intermediate gear wheel. The distance traveled could be calculated by counting the number of pebbles in the container and multiplying by the circumference of the large wheel and the gear ratio. Vitruvius also proposed a marine version of his device in which the distance was calculated from the rotation of paddles.
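
The distance arithmetic of the hodometer is a simple product. A sketch with illustrative figures (the wheel size and gearing below are assumptions for the sake of the example, not Vitruvius' specification):

```python
import math

# Hodometer: distance = pebbles dropped x gear ratio x wheel circumference.
wheel_diameter = 1.2     # metres (assumed)
gear_ratio = 400         # wheel revolutions per pebble dropped (assumed)
pebbles_dropped = 25

wheel_circumference = math.pi * wheel_diameter     # ~3.77 m per revolution
distance = pebbles_dropped * gear_ratio * wheel_circumference
print(round(distance / 1000, 1))                   # ~37.7 km travelled
```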

There are some who attribute the design of the odometer to Archimedes, but there is no strong evidence to support this.


Unfortunately none of the original illustrations from "De Architectura" have survived. Nevertheless the books have deeply influenced classical architects from the Renaissance through to the twentieth century. He was perhaps a little too influential though, through no fault of his own, since his style was so sublime that it captured public taste, stifling further innovation and generations of architects merely copied his ideas rather than developing alternative styles of their own.


Vitruvius has been called the world's first engineer to be known by name.




Circa 50 A.D. In the first century A.D. several spectacular aqueducts were built by Roman engineers and though many of them are still standing and in some cases still in use, there are unfortunately no records of who actually designed and built them. Two which stand out are the Pont du Gard near Nîmes in France and the aqueduct at Segovia in Spain.

(See pictures of these two Roman Aqueducts)


In the absence of records, the design and construction of the Pont du Gard has been attributed to Marcus Agrippa, the son-in-law of the Emperor Augustus, at around the year 19 B.C. However recent excavations and coins depicting the Emperor Claudius (41-54 A.D.) found at the site suggest that the construction may have taken place between 40 and 60 A.D. The aqueduct supplied Nimes with water and is nearly 30 miles (50 kilometres) long. The section over the river Gard has arches at three levels and is 900 feet (275 metres) long and 160 feet (49 metres) high. The top level contains a channel 6 feet (1.8 metres) high and 4 feet (1.2 metres) wide with a gradient of 0.4 per cent to carry the water. The bottom level carries a roadway. The three levels were built in dressed stone without mortar.


Some researchers have estimated that the Segovia aqueduct was started in the second half of the 1st Century A.D. and completed in the early years of the 2nd Century, during the reign of either Emperor Vespasian (69-79 A.D.) or Nerva (96-98 A.D.). Others have suggested it was started under Emperor Domitian (81-96 A.D.) and probably completed under Trajan (98-117 A.D.). The aqueduct brought water to Segovia from the Frio River 10 miles (16 km) away. Its maximum height is 93 ft 6 in (28.5 metres), including nearly 19 ft 8 in (6 metres) of foundations and it is constructed from 44 double arches, 75 single arches and another four single arches giving a total of 167 arches. The bridge section of the aqueduct is 2240 feet (683 metres) long and changes direction several times. Like the Pont du Gard, it was built from dressed stone without mortar.


Circa 60 A.D. Greek mathematician Hero of Alexandria conceived the idea of a reaction turbine though he didn't call it that. He called it an Aeolipile (from Aeolus, the Greek god of the wind, and the Latin pila, "ball") or the Sphere of Aeolus. It was a hollow sphere containing a small amount of water, free to rotate between two pivot points. When heated over a flame the steam from the boiling water escaped through two tangential nozzles in jets which caused the sphere to rotate at high speed. See diagram of Hero's Aeolipile.

Alternative designs show the water boiled in a separate chamber being fed through a hollow pipe into the sphere through one of the pivots.

It has been suggested that this device was used by priests to perform useful work such as opening temple doors and moving statues to impress gullible worshippers, but no physical evidence remains; these ideas were never developed and the aeolipile remained a toy.


Hero is also credited as being the first to propose a formal way of calculating square roots.
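
The procedure usually attributed to Hero starts from any positive guess and repeatedly averages the guess with the number divided by it. A minimal sketch in Python (the starting guess and iteration count are arbitrary choices):

```python
def hero_sqrt(n, iterations=10):
    """Hero's method: repeatedly replace the guess x by the average
    of x and n/x. Each pass roughly doubles the correct digits."""
    x = n / 2.0  # any positive starting guess converges
    for _ in range(iterations):
        x = (x + n / x) / 2.0
    return x

print(hero_sqrt(720.0))  # ~26.8328, close to the value of 26 5/6
                         # traditionally quoted from Hero's "Metrica"
```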


See more about Reaction Turbines.

See more about Steam Engines.


150 A.D. Some time between 150 A.D. and 160 A.D. Greek astronomer and mathematician Claudius Ptolemaeus (Ptolemy), a Roman citizen of Alexandria (not one of the Ptolemaic kings), published the Almagest "The Great Book". In it he summarised all the known information about astronomy and the mathematics which supported the theories. For over a thousand years it was the accepted explanation of the workings of the Universe. Unfortunately it was based on a geocentric model with uniform circular motions of the Sun and planets around the Earth. Where this ideal motion did not fit the observed movements, the anomalies were explained by the concept of equants, with the planets moving in smaller epicyclic orbits superimposed on the major orbit. It was not until Copernicus came along 1400 years later that Ptolemy's theory was seriously challenged. The Almagest was however a major source of information about Greek trigonometry.

In a similar vein to the Almagest, Ptolemy also published Geographia which summarised all that was known at the time about the World's geography as well as the projections used to create more accurate maps.


200 Greek philosopher Claudius Galen from Pergamum, Asia Minor, physician to five Roman emperors and surgeon to the Roman gladiators, was the first of many to claim therapeutic powers of magnets and to use them in his treatments. Galen carried out controlled experiments to support his theories and was the first to conclude that mental activity occurred in the brain rather than the heart, as Aristotle had suggested. Like many ancient philosophers his authority was virtually undisputed for many years after his death, thus discouraging original investigation and hampering medical progress until the 16th century.

But see Vesalius.


400 Greek scholar Hypatia of Alexandria took up her position as head of the Platonist school at the great Library of Alexandria (in the period between its third and its fourth and final sacking), where she taught mathematics, astronomy and philosophy. The first recorded woman in science, she is considered to be the inventor of the hydrometer, called the aerometer by the Greeks. Claims that she also invented the planar astrolabe are probably not true since there is evidence that the astrolabe dates from 200 years earlier, but her mathematician father Theon of Alexandria had written a treatise on the device and she no doubt lectured about its use for calculating the positions of the Sun, Moon and stars.


Hypatia still held pagan beliefs at a time when the influence of Christianity was beginning to grow and unfortunately her science teachings were equated with the promotion of paganism. In 415 she was attacked by a Christian mob who stripped her, dragged her through the streets, killed her and cut her to pieces using oyster shells. Judging from her appearance as depicted by Victorian artists, it's no surprise that the local monks were outraged. See Hypatia 1885 by Charles William Mitchell.


426 Electric and magnetic phenomena were investigated by St Augustine who is said to have been "thunderstruck" on witnessing a magnet lift a chain of rings. In his book "City of God" he uses the example of magnetic phenomena to defend the idea of miracles. Magnetism could not be explained but it manifestly existed, so miracles should not be dismissed just because they could not be explained.


619 In 1999, archaeologists at Nendrum on Mahee Island in Ireland investigating what they thought to be a stone tidal pond used for catching fish uncovered two stone built tidal mills with millstones and paddle blades dating from 619 A.D. and 787 A.D. Several tidal mills were built during the Roman occupation of England for grinding grain and corn. They operated by storing water behind a dam during high tide and letting it out to power the mill after the tide had receded, and were the forerunners of the modern schemes for capturing tidal energy.


645 Xuan Zhuang, the great apostle of Chinese Buddhism, returned to China from India with Buddhist images and more than 650 Sanskrit Buddhist scriptures which were reproduced in large quantities, giving impetus to the refinement of traditional methods of printing using stencils and inked squeezes first used by the Egyptians. A pattern of rows of tiny dots was made in a sheet of paper which was pressed down on top of a blank sheet and ink was forced through the holes. Later stencils developed by the Chinese and Japanese used human hair or silk thread to tie delicate isolated parts into the general pattern but there was no fabric backing to hold the whole image together. The stencil image was printed using a large soft brush, which did not damage the delicate paper pattern or the fine ties. These printing techniques of composite inked squeezes and stencils foreshadowed modern silk screen printing which was not patented until 1907.


700 - 1100 Islamic Science During Roman times, the flame of Greek science was maintained by Arab scholars who translated Greek scientific works into Arabic. From 700 A.D. however, when most of Europe was still in the Dark Ages, scientific developments were carried forward on a broad front by the Muslim world with advances in astronomy, mathematics, physics, chemistry and medicine. Chemistry (Arabic Al Khimiya "pour together", "weld") was indeed the invention of the Muslims who carried out pioneering work over three centuries putting chemistry to practical uses in the refinement of metals, dyeing, glass making and medicine. In those days the notion of alchemy also included what we would today call chemistry. Among the many notable Muslim scientists from this period were Jabir Ibn Haiyan, Al-Khawarizmi and Al-Razi.

By the tenth century however, according to historian Toby Huff, the preeminence of Islamic science began to wane. It had flourished in the previous three centuries while Muslims were in the minority in the Islamic regions. However, starting in the tenth century, widespread conversion to Islam took place and as the influence of Islam increased, so the tolerance of alternative educational and professional institutions and the radical ideas of freethinkers decreased. Science was dealt a further blow in 1485, thirty five years after the invention of the printing press, when the Ottoman Sultan Bayezid II issued an order forbidding the printing of Arabic letters by machines. Arabic texts had to be translated into Latin for publication and this no doubt hampered both the spread of Islamic science and ideas as well as the influence of the outside world on the Islamic community. This prohibition of printing was strictly enforced by subsequent Ottoman rulers until 1728 when the first printing press was established in Istanbul but due to objections on religious grounds it closed down in 1742 and the first Koran was not printed in Istanbul until 1875. Meanwhile in 1734 Deacon Abdalla Zakhir of the Greek Catholic Maronite Monastery of Saint John Sabigh in the Lebanon managed to establish the first independent Arabic printing press.


Islam was not alone in banning the dissemination of subversive or inconvenient ideas. Henry VIII in 1529, aware of the power of the press, became the first monarch to publish a list of banned books though he did not go so far as banning printing. He was later joined by others. In 1632 Galileo's book "Dialogue Concerning the Two Chief World Systems", in which he asserted that the Earth revolved around the Sun rather than the other way round, was placed by Pope Urban VIII on the index of banned books and Galileo was placed under house arrest. Despite these setbacks, European scientific institutions overcame the challenges from the church, taking over the flame carried by the Arabs, and the sixteenth and seventeenth centuries became the age of Scientific Revolution in Europe.


776 Persian chemist Abu Musa Jabir Ibn Haiyan (721-815), also known as Geber, was the first to put chemistry on a scientific footing, laying great emphasis on the importance of formal experimentation. In the period around 776 A.D. he perfected the techniques of crystallisation, distillation, calcination, sublimation and evaporation and developed several instruments for carrying them out, including the alembic (Arabic al-ambiq, "still") which simplified the process of distillation. He isolated or prepared several chemical compounds for the first time, notably nitric, hydrochloric, citric and tartaric acids, and published a series of books describing his work which were used as classic works on alchemy until the fourteenth century. Unfortunately the books were added to, under Geber's name, by various translators in the intervening period leading to some confusion about the extent of Geber's original work.


830 Around the year 830, Baghdad born mathematician Mohammad Bin Musa Al-Khawarizmi (770-840) published "The Compendium Book on Calculation by Completion and Balancing" in which he introduced the principles of algebra (Arabic Al-jabr "the reduction", i.e. of complicated relationships to a simpler language of symbols) which he developed for solving linear and quadratic equations. He also introduced to Europe the decimal system of Hindu-Arabic numerals as well as the concept of zero, a mathematical device at the time unknown in Europe, which was still wedded to Roman numerals. Al-Khawarizmi also constructed trigonometric tables for calculating the sine function. The word algorithm (algorism) is derived from his name.
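
As an illustration of "completion and balancing", here is a minimal sketch of the procedure for equations of the form x^2 + bx = c, applied to the worked case traditionally quoted from the book (x^2 + 10x = 39, giving x = 3). The function name and coding are ours, not Al-Khawarizmi's.

```python
import math

def complete_the_square(b, c):
    """Solve x^2 + b*x = c for the positive root by "completion and
    balancing": add (b/2)^2 to both sides so that the left hand side
    becomes the perfect square (x + b/2)^2."""
    half_b = b / 2.0
    return math.sqrt(c + half_b ** 2) - half_b

print(complete_the_square(10, 39))  # 3.0, the classic worked example
```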


850 Historian of Chinese inventions, Joseph Needham, identified 850 as the date of the first appearance of what the Chinese called the "fire chemical" or what we would now call gunpowder. Around that year, a book attributed to Chinese alchemist Cheng Yin warns of the dangerous incendiary nature of mixtures containing saltpetre (Potassium nitrate) and Sulphur, both essential components of gunpowder. Such chemicals mixed with various other substances including carbonaceous materials and Arsenic had been used in various concentrations by alchemists since around 300 A.D. when Ko Hung proposed these mixtures in recipes for transforming lead into Gold and Mercury into Silver while others later used them in attempts to create a potion of immortality.


After Cheng Yin's warning, similar mixtures were soon developed to produce flares and fireworks as well as military ordnance including burning bombs and fuses to ignite flame throwers burning petrol (gasoline). The first example of a primitive gun called a "fire arrow" appeared in 905, and in 994, arrows tipped with burning "fire chemicals" were used to besiege the city of Tzu-t'ung.

Most of these military applications were merely incendiary devices rather than explosives since they did not yet contain enough saltpetre (75%) to detonate. It was not until 1040 that the full power of the saltpetre rich mixture was discovered and the first true formula for gunpowder was published by Tseng Kung-Liang. After that, true explosive devices were developed including cannon, hand grenades and land mines.


Around 1150 it was realised that an arrow could be made to fly without the need for a bow by attaching to the shaft a bamboo tube packed with a burning gunpowder mix. This led to the development of the rocket, which was born when larger projectiles were constructed from the bamboo tubes alone without the arrows. A text from around that time describes how the combustion efficiency and hence the rocket thrust could be improved by creating a cavity in the propellant along the centre line of the rocket tube to maximise the burning surface - a technique still used in solid fuelled rockets today.


In 1221 Chinese chronicler Chao Yu-Jung recorded the first use of bombs as we would recognise them today, with cast Iron casings packed with explosives, which created deadly flying shrapnel when they exploded. They were used to great effect by a special catapult unit in Genghis Khan's Mongol army and by the Chinese Jin forces to defeat their Song enemies in the 1226 siege of Kaifeng.


See more about Nobel and Explosives.


920 Around the year 920, Persian chemist Mohammad Ibn Zakariya Al-Razi (865-925), known in the West as Rhazes, carried on Geber's work and prepared sulphuric acid, the "work horse" of modern chemistry and a vital component in the world's most common battery. He also prepared ethanol, which was used for medicinal applications, and described how to prepare alkali (Al-Qali, the salt work ashes, potash) from oak ashes. Al-Razi published his work on alchemy in his "Book of Secrets". The precise amounts of the substances he specified in his recipes demonstrate an understanding of what we would now call stoichiometry.


Several more words for chemicals are derived from their Arabic roots including alcohol (Al Kuhl, "essence", usually referring to ethanol) as well as arsenic and borax.




1040 Thermoremanent magnetisation was described in the Wu Ching Tsung Yao "Compendium of Military Technology" in China. Compass needles were made by heating a thin piece of iron, often in the shape of a fish, to a temperature above the Curie Point then cooling it in line with the Earth's magnetic field.


1041 Between 1041 and 1048 Chinese craftsman Pi Sheng produced the first printing system to use moveable type. Although his designs achieved widespread use in China, it was another four hundred years before the printing press was "invented" by Johann Gutenberg in Europe.


See more about Chinese Inventions.


1086 During the Song Dynasty (960-1127), Chinese astronomer, cartographer and mathematician Shen Kuo, in his Dream Pool Essays, describes the compass and its use for navigation and cartography as well as China's petroleum extraction and Pi Sheng's printing technique.


See more about Chinese Inventions.


1190 The magnetic compass was "invented" in Europe, 1400 years after the Chinese. It was described for the first time in the west by the St Albans monk Alexander Neckam in his treatise De Naturis Rerum.


1250s Italian theologian St Thomas Aquinas stood up for the cause of "reason", reconciling the philosophy of Aristotle with Christian doctrine. Challenging Aristotle thus became a challenge to the Church.


See also the Scientific Revolution.


1269 Petrus Peregrinus de Marincourt (Peter the Pilgrim), a French Crusader, used a compass to map the magnetic field of a lodestone. He discovered that a magnet had two magnetic poles, North and South, and was the first to describe the phenomena of attraction and repulsion. He also speculated that these forces could be harnessed in a machine.


1285 The earliest record of a mechanical clock with an escapement or timing control mechanism is a reference to a payment to a clock keeper at (the original) St. Paul's in London. The invention of the verge and foliot escapement was an important breakthrough in measuring the passage of time allowing the development of mechanical timepieces.

The name verge comes from the Latin virga, meaning stick or rod. (See picture and explanation of the Verge Escapement)


The inventor of the verge escapement is not known but we know that it dates from 13th century Europe, where it was first used in large tower clocks which were built in town squares and cathedrals. The earliest recorded description of an escapement is in Richard of Wallingford's 1327 manuscript Tractatus Horologii Astronomici on the clock he built at the Abbey of St. Albans. It was not a verge, but a more complex variation.

For over 200 years the verge was the only escapement used in mechanical clocks until alternative escapements started to appear in the 16th century and it was 350 years before the more accurate pendulum clock was invented by Huygens.


1350 Around this time the first blast furnaces for smelting iron from its ore began to appear in Europe, 1800 years after the Chinese were using the technique.


See more about Cast Iron and Steel.


1368-1644 China's Ming dynasty. When the Ming dynasty came to power, China was the most advanced nation on Earth. During the Dark Ages in Europe, China had already developed cast iron, the compass, gunpowder, rockets, paper, paper money, canals and locks, block printing and moveable type, porcelain, pasta and even "variolation", a precursor to vaccination, as well as many other inventions centuries before they were "invented" by the Europeans. From the first century B.C. they had also been using deep drilling to extract petroleum from the underlying rocks. They were so far ahead of Europe that when Marco Polo described these wondrous inventions in 1295 on his return to Venice from China he was branded a liar. China's innovation was based on practical inventions founded on empirical studies, but their inventiveness seems to have deserted them during the latter part of the dynasty and subsequently during the Qing (Ching) dynasty (1644-1911). China never developed a theoretical science base and both the Western scientific and industrial revolutions passed China by. Why should this be?


It is said that the answer lies in Chinese culture, to some extent Confucianism but particularly Daoism (Taoism), whose teachings promoted harmony with nature whereas Western aspirations were the control of nature. However these conditions existed before the Ming, when China's innovation led the world. A more likely explanation can be found in China's imperial political system in which a massive society was rigidly controlled by all-powerful emperors through a relatively small cadre of professional administrators (Mandarins) whose qualifications were narrowly based on their knowledge of Confucian ideals. If the emperor was interested in something, it happened; if he wasn't, it didn't.

The turning point in China's technological dominance came when the Ming emperor Xuande came to power in 1426. Admiral Zheng He, a Muslim eunuch, castrated as a boy when the Chinese conquered his tribe, had recently completed an audacious voyage of exploration on behalf of a previous Ming emperor, Yongle, to assert China's control of all of the known world and to extract tribute from its intended subjects. But his new master considered the benefits did not justify the huge expense of Zheng's fleet of 62 enormous nine masted junks and 225 smaller supply ships with their 27,000 crew. The emperor mothballed the fleet and henceforth forbade the construction of any ships with more than two masts, curbing China's aspirations as a maritime power and putting an end to its expansionist goals, a xenophobic policy which has lasted until modern times.

The result was that during both the Ming and the Qing dynasties a succession of complacent, conservative emperors cocooned in prodigious, obscene wealth, remote even from their own subjects, lived in complete isolation and ignorance of the rest of the world. Foreign influences, new ideas, and an independent merchant class who sponsored them, threatened their power and were consequently suppressed. By contrast the West was populated by smaller, diverse and independent nations competing with each other. Merchant classes were encouraged and innovation flourished as each struggled to gain competitive or military advantage.


Times have changed. Currently China is producing two million graduates per year, sixty percent of whom are in science and technology subjects, three times as many as in the USA.

After Japan, China is the second largest battery producer in the world and growing fast.


1450 German goldsmith and calligrapher Johann Gensfleisch zum Gutenberg from Mainz invented the printing press, considered to be one of the most important inventions in human history. For the first time knowledge and ideas could be recorded and disseminated to a much wider public than had previously been possible using hand written texts and its use spread rapidly throughout Europe. Intellectual life was no longer the exclusive domain of the church and the court and an era of enlightenment was ushered in with science, literature, religious and political texts becoming available to the masses who in turn had the facility to publish their own views challenging the status quo. It was the ability to publish and spread one's ideas that enabled the Scientific Revolution to happen. Nowadays the Internet is bringing about a similar revolution.


Although it was new to Europe, the Chinese had already invented printing with moveable type four hundred years earlier but, because of China's isolation, these developments never reached Europe.


Gutenberg printed Bibles and supported himself by printing indulgences, slips of paper sold by the Catholic Church to secure remission of the temporal punishments in Purgatory for sins committed in this life. He was a poor businessman and made little money from his printing system and depended on subsidies from the Archbishop of Mainz. Because he spent what little money he had on alcohol, the Archbishop arranged for him to be paid in food and lodging, instead of cash. Gutenberg died penniless in 1468.


1474 The first patent law, a statute issued by the Republic of Venice, provided for the grant of exclusive rights for limited periods to the makers of inventions. It was a law designed more to protect the economy of the state than the rights of the inventor since, as the result of its declining naval power, Venice was changing its focus from trading to manufacturing. The Republic required that it be informed of all new and inventive devices, once they had been put into practice, so that it could take action against potential infringers.


1478 After 10 years working as an apprentice and assistant to successful Florentine artist Andrea del Verrocchio at the court of Lorenzo de Medici in Florence, at the age of 26, Leonardo da Vinci left the studio and began to accept commissions on his own.

One of the most brilliant minds of the Italian Renaissance, Leonardo was hugely talented as an artist and sculptor but also immensely creative as an engineer, scientist and inventor. The fame of his surviving paintings has meant that he has been regarded primarily as an artist, but his scientific insights were far ahead of their time. He investigated anatomy, geology, botany, hydraulics, acoustics, optics, mathematics, meteorology, and mechanics and his inventions included military machines, flying machines, and numerous hydraulic and mechanical devices.


He lived in an age of political in-fighting and intrigue between the independent Italian states of Rome, Milan, Florence, Venice and Naples as well as lesser players Genoa, Siena, and Mantua ever threatening to degenerate into all out war, in addition to threats of invasion from France. In those turbulent times da Vinci produced a series of drawings depicting possible weapons of war during his first two years as an independent. Thus began a lifelong fascination with military machines and mechanical devices which became an important part of his expanding portfolio and the basis for many of his offers to potential patrons, the heads of these belligerent, or fearful, independent states.

Despite his continuing interest in war machines, he claimed he was not a war monger and he recorded several times in his notebooks his discomfort with designing killing machines. Nevertheless, he actively solicited such commissions because by then he had his own pupils and needed the money to pay them.


Most of Leonardo's designs were not constructed in his lifetime and we only know about them through the many models he made but mostly from the 13,000 pages of notes and diagrams he made in which he recorded his scientific observations and sketched ideas for future paintings, architecture, and inventions. Unlike academics today who rush into publication, he never published any of his scientific works, fearing that others would steal his ideas. Patent law was still in its infancy and difficult, if not impossible, to enforce. Such was his paranoia about plagiarism that he even wrote all of his notes back to front, in mirror writing, sometimes also in code, so he could keep his ideas private. He was not however concerned about keeping the notes secret after his death and in his will he left all his manuscripts, drawings, instruments and tools to his loyal pupil, Francesco Melzi, with no objection to their publication. Melzi expected to catalogue and publish all of Leonardo's works but he was overwhelmed by the task, even with the help of two full-time scribes, and left only one incomplete volume, "Trattato della Pittura" or "Treatise on Painting", about Leonardo's paintings before he himself died in 1570. On his death the notes were inherited by his son Orazio who had no particular interest in the works and eventually sections of the notes were sold off piecemeal to treasure seekers and private collectors who were interested more in Leonardo's art than his science.


Because of his secrecy, his contemporaries knew nothing of his scientific works which consequently had no influence on the scientific revolution which was just beginning to stir. It was about two centuries before the public and the scientific community began gradually to get access to Leonardo's scientific notes when some collectors belatedly allowed them to be published or when they ended up on public display in museums where they became the inspiration for generations of inventors. Unfortunately, only 7000 pages are known to survive and over 6000 pages of these priceless notebooks have been lost forever. Who knows what wisdom they may have contained?


Leonardo da Vinci is now remembered as both "Leonardo the Artist" and "Leonardo the Scientist" but perhaps "Leonardo the Inventor" would be more apt as we shall see below.


Leonardo the Artist

It would not do justice to Leonardo to mention only his scientific achievements without mentioning his talent as a painter. His true genius was not as a scientist or an artist, but as a combination of the two: an "artist-engineer".

He did not sign his paintings and only 24 of his paintings are known to exist plus a further 6 paintings whose authentication is disputed. He did however make hundreds of drawings most of which were contained in his copious notes.

  • The "Treatise on Painting"
  • This was the volume of Leonardo's manuscripts transcribed and compiled by Melzi. The engravings needed for reproducing Leonardo's original drawings were made by another famous painter, Nicolas Poussin. As the title suggests it was intended as a technical manual for artists; however, it does contain some scientific notes about light, shade and optics in so far as they affect art and painting. For the same reason it also contains a small section of Leonardo's scientific works about anatomy. The publication of this volume in 1651 was the first time examples of the contents of Leonardo's notebooks were revealed to the world, but it was 132 years after his death. The full range of his "known" scientific work was only made public little by little many years later.


Leonardo was one of the world's greatest artists; the few paintings he made were unsurpassed and his draughtsmanship had a photographic quality. Just seven examples of his well known artworks are mentioned here.

  • Paintings
    • The "Adoration of the Magi" painted in 1481.
    • The "Virgin of the Rocks" painted in 1483.
    • "The Last Supper" a large mural 29 feet long by 15 feet high (8.8 m x 4.6 m) started in 1495 which took him three years to complete.
    • The "Mona Lisa" (La Gioconda) painted in 1503.
    • "John the Baptist" painted in 1515.
  • Drawings
    • The "Vitruvian Man" as described by the Roman architect Vitruvius was drawn in 1490, showing the correlation between the proportions of the ideal human body with geometry, linking art and science in a single work.
    • Illustrations for mathematician Fra Luca Pacioli's book "De divina proportione" (The Divine Proportion), drawn in 1496. See more about The Divine Proportion.

Leonardo the Scientist

The following are some examples of the extraordinary breadth of da Vinci's scientific works:

  • Military Machines
  • After serving his apprenticeship with Verrocchio, Leonardo had a continuous flow of military commissions throughout his working life.

    In 1481 he wrote to Ludovico Sforza, Duke of Milan, with a detailed C.V. of his military engineering skills, offering his services as military engineer, architect and sculptor and was appointed by him the following year. In 1502 the ruthless and murderous Cesare Borgia, illegitimate son of Pope Alexander VI and seducer of his own younger sister (Lucrezia Borgia), appointed Leonardo as military engineer to his court where he became friends with Niccolo Machiavelli, Borgia's influential advisor. In 1507, some time after France had invaded and occupied Milan, he accepted the post of painter and engineer to King Louis XII of France in Milan and finally in 1517 he moved to France at the invitation of King Francis I to take up the post of First Painter, Engineer and Architect of the King. These commissions gave Leonardo ample scope to develop his interest in military machines.


    Leonardo designed war machines for both offensive and defensive use. They were designed to provide mobility and flexibility on the battlefield which he believed was crucial to victory. He also designed machines to use gunpowder which was still in its infancy in the fifteenth century.


    His military inventions included:

    • Mobile bridges including drawbridges and a swing bridge for crossing moats, ditches and rivers. His swing bridge was a cantilever design with a pivot on the river bank and a counterweight to facilitate manoeuvring the span over the river. It also had wheels and a rope-and-pulley system which enabled easy transport and quick deployment.
    • Siege machines for storming walls.
    • Chariots with scythes mounted on the sides to cut down enemy troops.
    • A giant crossbow intended to fire large explosive projectiles several hundred yards.
    • Trebuchets - Very large catapults, based on releasing mechanical counterweights, for flinging heavy projectiles into enemy fortifications.
    • Bombards - Short barrelled, large-calibre, muzzle-loading, heavy siege cannon or mortars, fired by gunpowder and used for throwing heavy stone balls. The modern replacement for the trebuchet. Leonardo's design had adjustable elevation. He also envisaged exploding cannonballs, made up from several smaller stone cannonballs sewn into spherical leather sacks and designed to injure and kill many enemies at one time. We would now call these cluster bombs.
    • Springalds - Smaller, more versatile cannon, for throwing stones or Greek fire, with variable azimuth and elevation adjustment so that they could be aimed more precisely.
    • A series of guns and cannons with multiple barrels. The forerunners of machine guns.
    • They included a triple barrelled cannon and an eight barrelled gun with eight muskets mounted side by side as well as a 33 barrelled version with three banks of eleven muskets designed to enable one set of eleven guns to be fired while a second set cooled off and a third set was being reloaded. The banks were arranged in the form of a triangle with a shaft passing through the middle so that the banks could be rotated to bring the loaded set to the top where it could be fired again.

    • A four wheeled armoured tank with a heavy protective cover reinforced with metal plates similar to a turtle or tortoise shell with 36 large fixed cannons protruding from underneath. Inside, a crew of eight men operating cranks geared to the wheels would drive the tank into battle. The drawing in Leonardo's notebook contains a curious flaw since the gearing would cause the front wheels to move in the opposite direction from the rear wheels. If the tank was built as drawn, it would have been unable to move. It is possible that this simple error escaped even Leonardo's inventive mind, but it has also been suggested that, like his coded notes, it was a deliberate fault introduced to confuse potential plagiarists. The idea that this armoured tank loaded with 36 heavy cannons in such a confined space could be both operated and manoeuvred by eight men is questionable.
    • Automatic igniting device for firearms.
  • Marine Warfare Machines and Devices
  • Leonardo also designed machines for naval warfare including:

    • Designs for a pedal driven paddle boat. The forerunner of the modern pedalo.
    • Hand flippers and floats for walking on water.
    • Diving suit to enable enemy vessels to be attacked from beneath the water's surface by divers cutting holes below the boat's water line. It consisted of a leather diving suit equipped with a bag-like helmet fitting over the diver's head. Air was supplied to the diver by means of two cane tubes attached to the headgear which led up to a cork diving bell floating on the surface.
    • A double hulled ship which could survive the exterior skin being pierced by ramming or underwater attack, a safety feature which was eventually adopted in the nineteenth century.
    • An armoured battleship similar to the armoured tank which could ram and sink enemy ships.
    • Barrage cannon - a large floating circular platform with 16 cannons mounted around its periphery. It was powered and steered by two operators turning drive wheels geared to a large central drive wheel connected to paddles for propelling it through the water. Other operators fired the cannons.
  • Flying Machines
  • Leonardo studied the flight of birds and, after the legendary Icarus, was one of the first to attempt to design human powered flying machines, recording his ideas in numerous drawings. A step up from Chinese kites.

    His drawings included:

    • A design for a parachute. The world's first.
    • Various gliders
    • Designs for wings intended to carry a man aloft, similar to scaled up bat wings.
    • Human powered flying machines known as ornithopters, (from Greek ornithos "bird" and pteron "wing"), based on flapping wings operated by means of levers and cables.
    • A helical air screw with its central shaft powered by a circular human treadmill intended to lift off and fly like a modern helicopter.
  • Civil Works
  • Leonardo designed many civil works for his patrons and also the equipment to carry them out.

    These included:

    • A crane for excavating canals, a dredger and lock gates designed with swinging gates rather than the lifting doors of the "portcullis" or "guillotine" designs which were typically used at the time. Leonardo's gates also contained smaller hatches to control the rate of filling the lock to avoid swamping the boats.
    • Water lifting devices based on the Archimedes screw and on water wheels
    • Water wheels for powering mechanical devices and machines.
    • Architecture: Leonardo made many designs for buildings, particularly cathedrals and military structures, but none of them were ever built.
    • When Milan, with a population of 200,000 living in crowded conditions, was beset by bubonic plague, Leonardo set about designing a healthier and more pleasant ideal city. It was to be built on two levels with the upper level reserved for the householders with living quarters for servants and facilities for deliveries on the lower level. The lower level would also be served by covered carriageways and canals for drainage and to carry away sewage while the residents of the upper layer would live in more tranquil, airy conditions above all this with pedestrian walkways and gardens connecting their buildings.
    • Leonardo produced a precision map of Imola, accurate to a few feet (about 1 m) based on measurements made with two variants of an odometer or what we would call today a surveyor's wheel which he designed and which he called a cyclometer. They were wheelbarrow-like carts with geared mechanisms on the axles to count the revolutions of the wheels from which the distance could be determined. He followed up with physical maps of other regions in Italy.
  • Tools and Instruments
  • The following are examples of some of the tools and scientific instruments designed by da Vinci which were found in his notes.

    • Solar Heating - In 1515 when he worked at the Vatican, Leonardo designed a system of harnessing solar energy using a large concave mirror, constructed from several smaller mirrors soldered together, to focus the Sun's rays to heat water.
    • Improvements to the printing press to simplify its operation so that it could be operated by a single worker.
    • Anemometer - It consisted of a horizontal bar from which was suspended a rectangular piece of wood by means of a hinge. The horizontal bar was mounted on two curved supports on which a scale to measure the rotation of the suspended wood was marked. When the wind blew, the wood swung on its hinge within the frame and the extent of the rotation was noted on the scale which gave an indication of the force of the wind.
    • A 13 digit decimal counting machine - Based on a gear train and often incorrectly identified as a mechanical calculator.
    • Clock - Leonardo was one of the early users of springs rather than weights to drive the clock and to incorporate the fusée mechanism, a cone-shaped pulley with a helical groove around it which compensated for the diminishing force from the spring as it unwound. His design had two separate mechanisms, one for minutes and one for hours as well as an indication of phases of the moon.
    • He also designed numerous machines to facilitate manufacturing including a water powered mechanical saw, horizontal and vertical drilling machines, spring making machines, machines for grinding convex lenses, machines for grinding concave mirrors, file cutting machines, textile finishing machines, a device for making sequins, rope making machines, lifting hoists, gears, cranks and ball bearings.
    • Though drawings and models exist, the claim that Leonardo invented the bicycle is thought by many to be a hoax. The rigid frame has no steering mechanism, making it impossible to ride.
  • Theatrical Designs
    • Leonardo was often in demand for designing theatrical sets and decorations for carnivals and court weddings.
    • He also built automata in the form of robots or animated beasts whose lifelike movements were created by a series of springs, wires, cables and pulleys.
    • His self propelled cart, powered by a spring, was used to amaze theatre audiences.
    • He designed musical instruments including a lyre, a mechanical drum, and a viola organista with a keyboard. This latter instrument consisted of a series of strings each tuned to a different pitch. A bow in the form of a continuously rotating loop perpendicular to the strings was stretched between two pulleys mounted in front of the strings. The keys on the keyboard were each associated with a particular string and when a key was pressed a mechanism pushed the bow against the corresponding string to play the note.
  • Anatomy
  • As part of his training in Verrocchio's studio, like any artist, Leonardo studied anatomy as an aid to figure drawing; however, starting around 1487, and later with the doctor Marcantonio della Torre, he made much more in depth studies of the body, its organs and how they function.

    • During his studies Leonardo had access to 30 corpses which he dissected, removing their skin, unravelling intestines and making over 200 accurate drawings of their organs and body parts.
    • He made similar studies of other animals, dissecting cows, birds, monkeys, bears, and frogs, and comparing their anatomical structure with that of humans.
    • He also observed and tried to comprehend the workings of the cardiovascular, respiratory, digestive, reproductive and nervous systems and the brain, without much success. He did however witness the killing of a pig during a visit to an abattoir. He noticed that when a skewer was thrust into its heart, the beat of the heart coincided with the movement of blood into the main arteries. He understood the mechanism of the heart if not its function, predating by over 100 years the conclusions of Harvey.

    Because the bulk of his work was not published for over 200 years, his observations could possibly have prompted an earlier advance in medical science had they been made available during his lifetime. At least his drawings provided a useful resource for future students of anatomy.

  • Scientific Writings
  • Leonardo had an insatiable curiosity about both nature and science and made extensive observations which were recorded in his notebooks.

    They included:

    • Anatomy, biology, botany, hydraulics, mechanics, ballistics, optics, acoustics, geology, fossils

    He did not however develop any new scientific theories or laws. Instead he used the knowledge gained from his observations to improve his skills as an artist and to invent a constant stream of useful machines and devices.


"Leonardo the Inventor"

Leonardo unquestionably had one of the greatest inventive minds of all time, but very few of his designs were ever constructed at the time. The reason normally given is that the technology didn't exist during his lifetime. With his skilled draughtsmanship, Leonardo's designs looked great on paper but in reality many of them would not actually work in practice, an essential criterion for any successful invention, and this has since been borne out by subsequent attempts to construct the devices as described in his plans. This should not however detract in any way from Leonardo's reputation as an inventor. His innovations were way ahead of their time, unique, wide ranging and based on sound engineering principles. What was missing was the science.


At least he had the benefits of Archimedes' knowledge of levers, pulleys and gears, all of which he used extensively, but that was the limit of available science.

Newton's Laws of Motion were not published until two centuries after Leonardo was working on his designs. The science of strength of materials was also unheard of until Newton's time when Hooke made some initial observations about stress and strain, and there was certainly no data available to Leonardo about the engineering properties of materials such as tensile, compressive, bending and impact strength or air pressure and the densities of the air and other materials. Torricelli's studies on air pressure came about fifty years before Newton, and Bernoulli's theory of fluid flow, which describes the science behind aerodynamic lift, did not come till fifty years after Newton. But, even if the science had existed, Leonardo lacked the mathematical skills to make the best of it.


So it's not surprising that Leonardo had to make a lot of assumptions. This did not so much affect the function of his mechanisms nor the operating principle on which they were based, rather it affected the scale and proportions of the components and the force or power needed to operate them. His armoured tank would have been immensely heavy and difficult to manoeuvre, and its naval version would have sunk unless its buoyancy was improved. The wooden gears used would probably have been unable to transmit the enormous forces required to move these heavy vehicles. The repeated recoil forces on his multiple-barrelled guns may have shattered their mounts, and his flying machines were very flimsy, with inadequate wing area and a need for far more power to keep them aloft than a human could supply. So there was nothing fundamentally wrong with most of his designs and most of the shortcomings could have been overcome with iterative development and testing programmes to refine the designs. Unfortunately Leonardo never had that opportunity.


"Leonardo the Myths"

Leonardo was indeed a genius but his reputation has also been enhanced or distorted by uncritical praise. Speculation, rather than firm evidence, about the performance of some of the mechanisms mentioned in his notebooks and what may have been in the notebooks which have been lost, has incorrectly credited him with the invention of the telescope, mathematical calculating machines and the odometer to name just three examples.

Though he did experiment with optics and made drawings of lenses, he never mentioned a telescope in his notes, or what he may have seen with it, so it is highly unlikely that he invented the telescope.

As for his so-called calculating machine: it looked very similar to the calculator made by Pascal 150 years later but it was in fact just a counting machine since it did not have an accumulator to facilitate calculations by holding two numbers at a time in the machine as in Pascal's calculator.

Leonardo's "telescope" and "calculating machine" are examples of uninformed speculation from tantalising sketches made, without corresponding explanations, in his notes. Such speculation is based on the reasoning that, if one of his sketches or drawings "looks like" some more recent device or mechanism, then it "must be" or actually "is" an early example of such a device. Leonardo already had a well deserved reputation as a genius without this unnecessary gold plating.

Similarly regarding the odometer: The claim by some, though not by Leonardo himself, that he invented the odometer implies that he was the first to envisage the concept of an odometer. The odometer was in fact invented by Vitruvius 15 centuries earlier. Leonardo invented "an" odometer, not "the" odometer. Many inventions are simply improvements, alternatives or variations, of what went before. Without a knowledge of precedents, it is a mistake to extrapolate a specific case to a general conclusion. Leonardo's design was based on measuring the rotation of gear wheels, whereas Vitruvius' design was based on counting tokens. (Note that Vitruvius also mentions in his "Ten Books on Architecture", designs for trebuchets, water wheels and battering rams protected by mobile siege sheds or armoured vehicles which were called "tortoises".)

It is rare to find an invention which depends completely on a unique new concept and many perfectly good inventions are improvements or alternatives to prior art. This applies to some of Leonardo's inventions just as it does to the majority of inventions today. Nobody would (or should) claim that Leonardo invented the clock when his innovation was to incorporate a new mechanical movement into his own version of a clock, nor should they denigrate his actual invention.


It's a great pity that Leonardo kept his works secret and that they remained unseen for so many years after his death. How might technology have advanced if he had been willing to share his ideas, to explain them to his contemporaries and to benefit from their comments?


1492 Discovery of the New World by Christopher Columbus showed that the Earth still held vast unknowns, indirectly giving impetus to the scientific revolution.


1449 The first patent for an invention was granted by King Henry VI to Flemish-born John of Utynam for a method of making stained glass, required for the windows of Eton College, giving John a 20-year monopoly. The Crown thus started making specific grants of privilege to favoured manufacturers and traders, signified by Letters Patent, open letters marked with the King's Great Seal.

The system was open to corruption and in 1623 the Statute of Monopolies was enacted to curb these abuses. It was a fundamental change to patent law which took away the rights of the Crown to create trading monopolies and guaranteed the inventor the legal right of patents instead of depending on the royal prerogative. So called patent law, or more generally intellectual property law, has undergone many changes since then to encompass new concepts such as copyrights and trademarks and is still evolving as new technologies such as software and genetics demand new rules.


1500 to 1700 The Scientific Revolution and The Age of Reason

Up to the end of the sixteenth century there had been little change in the accepted scientific wisdom inherited from the Greeks and Romans. Indeed it had even been reinforced in the thirteenth century by St. Thomas Aquinas who proclaimed the unity of Aristotelian philosophy with the teachings of the church. The credibility of new scientific ideas was judged against the ancient authority of Aristotle, Galen, Ptolemy and others whose science was based on rational thought which was considered to be superior to experimentation and empirical methods. Challenging these conventional ideas was considered to be a challenge to the church and scientific progress was hampered accordingly.

In medieval times, the great mass of the population had no access to formal education let alone scientific knowledge. Their view of science could be summed up in the words of Arthur C. Clarke, "Any sufficiently advanced technology is indistinguishable from magic".


Things began to change after 1500 when a few pioneering scientists discovered, and were able to prove, flaws in this ancient wisdom. Once this happened others began to question accepted scientific theories and devised experiments to validate their ideas. In the past, such challenges had been hampered by the lack of accurate measuring instruments which had limited the range of experiments that could be undertaken and it was only in the seventeenth century that instruments such as microscopes, telescopes, clocks with minute hands, accurate weighing equipment, thermometers and manometers started to become available. Experimenters were then able to develop new and more accurate measurement tools to run their experiments and to explore new scientific territories thus accelerating the growth of new scientific knowledge.

The printing press was the great catalyst in this process. Scientists could publish their work, thus reaching a much greater audience, but just as important, it gave others working in the field, access to the latest developments. It gave them the inspiration to explore these new scientific domains from a new perspective without having to go over ground already covered by others.

The increasing use of gunpowder also had its effect. Cannons and hand held weapons swept the aristocratic knight from the field of battle. Military advantage and power went to those with the most effective weapons and heads of state began to sponsor experimentation in order to gain that advantage.

Scientific method thus replaced rational thought as a basis for developing new scientific theories and over the next 200 years scientific theories and scientific institutions were transformed, laying the foundations on which the later Industrial Revolution depended.


Some pioneers are shown below.


  • (600 B.C.) Thales - the original thinker, deprecated by Aristotle.
  • (300 B.C.) Euclid promoted the disciplines of proof, logic and deductive reasoning in mathematics.
  • (269 B.C.) Archimedes followed Euclid's disciplines and was the first to base engineering inventions on mathematical principles.
  • (1450) Johannes Gutenberg did not make any scientific breakthroughs but his printing press was one of the most important developments and essential prerequisites which made the scientific revolution possible. For the first time it became easy to record information and to disseminate knowledge making learning and scholarship available to the masses.
  • (1492) Christopher Columbus' discovery of the New World showed that the World still held vast unknowns sparking curiosity.
  • (1514) Nicolaus Copernicus challenged the accepted wisdom of Ptolemy which had reigned supreme for 1400 years, that the Earth was the centre of the Universe, and proposed instead that the Universe was centred on the Sun.
  • (1543) Andreas Vesalius showed that conventional theories about human anatomy, unquestioned since they were developed over 1300 years earlier by Galen, were incorrect.
  • (1576) Tycho Brahe made detailed astronomical measurements to enable predictions of planetary motion to be based on observations rather than logical deduction.
  • (1600) William Gilbert an early advocate of scientific method rather than rational thought.
  • (1605) Francis Bacon like Gilbert, a proponent of scientific method.
  • (1608) Hans Lippershey invented the telescope, thus providing the tools for much more accurate observations, and deeper understanding of the cosmos.
  • (1609) Johannes Kepler developed mathematical relationships, based on Brahe's measurements which enabled planetary movements to be predicted.
  • (1610) Galileo Galilei demonstrated that the Earth was not the centre of the Universe and in so doing, brought himself into serious conflict with the church.
  • (1628) William Harvey outlined the true function of the heart correcting misconceptions about the functions and flow of blood as well as classical myths about its purpose.
  • (1642) Pascal together with Fermat (1653) described chance and probability in mathematical terms, rather than fate or the will of the Gods.
  • (1643) Evangelista Torricelli's invention of the barometer led to an understanding of the properties of air.
  • (1644) René Descartes challenged Aristotle's logic based on rational thinking with his own mathematical logic and attempted to describe the whole universe in mathematical terms. He was still not convinced of the value of experimental method.
  • (1656) Christiaan Huygens invented the pendulum clock enabling scientific experiments to be supported by accurate time measurements for the first time.
  • (1660) The Royal Society was founded in London to encourage scientific discovery and experiment.
  • (1661) Robert Boyle introduced the concept of chemical elements based on empirical observations rather than Aristotle's logical earth, fire, water and air.
  • (1663) Otto von Guericke devised an experiment using his Magdeburg Spheres to disprove Aristotle's claim that a vacuum cannot exist.
  • (1665) Robert Hooke's microscope observations, published in Micrographia, opened a window on the previously unseen microscopic world raising questions about life itself.
  • (1666) The French Académie des Sciences was founded in Paris.
  • (1668) Antonie van Leeuwenhoek expanded on Hooke's observations and established microbiology.
  • (1687) Isaac Newton derived a set of mathematical laws which provided the basis of a comprehensive understanding of the physical world.
  • (1700) The German Academy of Sciences was founded in Berlin.

The Age of Reason marked the triumph of evidence over dogma. Or did it? There remained one great mystery yet to be unravelled but it was another 200 years before it came up for serious consideration: The Origin of Species.


1514 Polish polymath and Catholic cleric Nicolaus Copernicus, a mathematician, economist, physician, linguist, jurist and accomplished statesman with astronomy as a hobby, published and circulated to a small circle of friends a preliminary draft manuscript in which he described his revolutionary idea of the heliocentric universe, in which celestial bodies moved in circular motions around the Sun, challenging the notion of the geocentric universe. Such heresies were unthinkable at the time. They not only contradicted the conventional wisdom that the World was the centre of the universe but, worse still, they undermined the story of creation, one of the fundamental beliefs of the Christian religion. Dangerous stuff!

It was not until around 1532 that Copernicus completed the work, which he called De Revolutionibus Orbium Coelestium ("On the Revolutions of the Heavenly Spheres"), but he still declined to publish it. Historians do not agree on whether this was because Copernicus was unsure that his observations and calculations would be sufficiently robust to challenge Ptolemy's Almagest, which had survived almost 1400 years of scrutiny, or whether he feared the wrath of the church. Copernicus' model was however simpler than Ptolemy's geocentric model and matched the observed motions of the planets more closely. He eventually agreed to publish the work at the end of his life and the first printed copy was reportedly delivered to him on his deathbed, at the age of seventy, in 1543.

As it turned out, "De Revolutionibus Orbium Coelestium" was put on the Catholic church's index of prohibited books in 1616, as a result of Galileo's support for its revolutionary theory, and remained there until 1835.


One of the most important books ever written, De Revolutionibus' ideas ignited the Scientific Revolution (see above), but only around 300 to 400 copies were printed and it has recently become known as "the book that nobody read".


1533 Frisian (from Friesland, in the modern Netherlands) mathematician and cartographer Gemma Frisius proposed the idea of triangulation for surveying and producing maps. Because it was often inconvenient or difficult to measure large distances directly, he described how the distance to a remote target location could be determined locally, without actually going there, using only angle measurements. A triangle is formed between the target and two reference points on a local baseline of known length, and the angles between the baseline and the lines of sight from each reference point to the target are measured. The distance to the target can then be calculated using simple trigonometry. It was thus easier to survey the countryside and construct maps by dividing the area into triangles rather than squares. Similar principles had been used around 600 B.C. by Greek philosopher Thales, but the method had not been widely adopted. Triangulation is still used today in applications from surveying to celestial navigation.
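
A minimal sketch of the trigonometry involved, using the law of sines (the baseline length and sighting angles below are invented, illustrative values):

    from math import sin, radians

    def distance_to_target(baseline, angle_a_deg, angle_b_deg):
        # Triangle ABT: a baseline AB of known length, with the angles
        # at A and B measured between the baseline and the lines of
        # sight to the target T.
        # Law of sines: AT = AB * sin(B) / sin(A + B)
        a = radians(angle_a_deg)
        b = radians(angle_b_deg)
        return baseline * sin(b) / sin(a + b)

    # A 100 m baseline with sighting angles of 60 and 70 degrees places
    # the target about 123 m from point A.
    print(round(distance_to_target(100.0, 60.0, 70.0), 1))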


In 1553 Frisius was also the first to describe how longitude could be determined by comparing local solar time with the time at some reference location provided by an accurate clock but no such clocks were available at the time.


1543 Belgian physician and professor at the University of Padua, Andries van Wesel, more commonly known as Vesalius, published De Humani Corporis Fabrica (On the Structure of the Human Body), one of the most influential books on human anatomy. He carried out his research on the corpses of executed criminals and discovered that the research and conclusions published by the previous, undisputed authority on this subject, Galen, could not possibly have been based on an actual human body. Vesalius was one of the first to rely on direct observations and scientific method rather than the rational logic practiced by the ancient philosophers, and in so doing overturned 1300 years of conventional wisdom. Such challenges to long held theories marked the start of the Scientific Revolution.


1551 Damascus born Muslim polymath, Taqi al-Din, working in Egypt, described an impulse turbine used to drive a rotating spit over a fire. It was simply a jet of steam impinging on the blades of a paddle wheel mounted on the end of the spit. Like Hero's reaction turbine, it was not developed further at the time into more practical applications.

See more about Impulse Turbines.

See more about Steam Engines.


1576 Danish astronomer and alchemist, Tycho Brahe, built an observatory where he gathered data with the aim of constructing a set of tables for calculating the position of the planets for any date in the past or in the future, a task later completed by his assistant Johannes Kepler. He lived before the invention of the telescope and his measurements were made with a cross staff, a simple mechanical device similar to a protractor used for measuring angles. Nevertheless, despite his primitive instruments, he set new standards for precise and objective measurements, but he still relied on empirical observations rather than mathematics for his predictions.


Brahe accepted Copernicus' heliocentric model for the orbits of the planets, which explained the apparent anomalies in their orbits exhibited by Ptolemy's geocentric model. However, he still clung to the Ptolemaic model of the Sun and Moon revolving around the Earth, as this fitted nicely with the notion of Heaven and Earth and did not cause any conflicts with religious beliefs.

However, using the data gathered together with Brahe, Kepler was able to confirm the heliocentric model for the orbits of planets, including the Earth, and to derive mathematical laws for their movements.


See also the Scientific Revolution


A wealthy, hot-headed and extroverted nobleman, said to own one percent of the entire wealth of Denmark, Brahe had a lust for life and food. He wore a gold prosthesis in place of his nose which it was claimed had been cut off by his cousin in a duel over who was the better mathematician.


In 1601, Brahe died in great pain in mysterious circumstances, eleven days after becoming ill during a banquet. Until recently the accepted explanation of the cause of death, provided by Kepler, was that it was an infection arising from a strained bladder, or from rupture of the bladder, resulting from staying too long at the dining table.

By examining Brahe's remains in 1993, Danish toxicologist Bent Kaempe determined that Brahe had died from acute Mercury poisoning which would have exhibited similar symptoms. Among the many suspects, in 2004 the finger was firmly pointed by writers Joshua and Anne-Lee Gilder, at Kepler, the frail, introverted son of a poor German family.

Kepler had the motive: he was consumed by jealousy of Brahe, and he wanted Brahe's data, which could make him famous but had been denied to him. He also had the means and the opportunity. After Tycho's death, when his family were distracted by grief, Kepler simply walked away with the priceless observations which belonged to Tycho's heirs.


With only a few tantalising facts to go on, historians attempt to construct a more complete picture of what happened in the distant past. In Brahe's case there could be another explanation of his demise. From the available facts it could be concluded that Brahe's death was due to an accidental overdose of Mercury, which at the time was the conventional medication prescribed for the treatment of syphilis, or to syphilis itself. This is corroborated by the fact that one of the symptoms of the advanced state of the disease is the loss of the nose due to the collapse of the bridge tissue. Brahe's hedonistic lifestyle could well have made this a possibility. Kepler's purloining of Brahe's data could have been a simple act of opportunism rather than the motivation for murder.


1593 The thermometer was invented by Italian astronomer and physicist Galileo Galilei. It has been variously called an air thermometer or a water thermometer, but at the time it was called a thermoscope. It consisted of a glass bulb at the end of a long glass tube held vertically with the open end immersed in a vessel of water. As the temperature changed, the water would rise or fall in the tube due to the contraction or expansion of the air. It was sensitive to air pressure and could only be used to indicate temperature changes since it had no scale. In 1612 Italian Santorio Santorio added a scale to the apparatus, creating the first true thermometer; for the first time, temperatures could be quantified.


There is no evidence that the decorative, so called, Galileo thermometers based on the Archimedes principle were invented by Galileo or that he ever saw one. They consist of several sealed glass floats in a sealed, liquid filled glass cylinder. The density of the liquid varies with the temperature, and the floats are designed with different densities so as to float or sink at different temperatures. There were however thriving glass blowing and thermometer crafts based in Florence (Tuscany), where the Accademia del Cimento, which was noted for its instrument making, produced many of these thermometers, also known as Florentine thermometers or Infingardi (Lazy-Ones) or Termometros Lentos (Slow) because of the slowness of the motion of the small floating spheres in the alcohol of the vial. It is quite likely that these designs were the work of the Grand Duke of Tuscany, Ferdinand II, who had a special interest in thermometers and meteorology.


1595 Swiss clockmaker Jost Burgi invented the gravity remontoire, a constant force escapement which improved the accuracy of timekeeping mechanisms by over an order of magnitude.

See more about the remontoire


1600 William Gilbert of Colchester, physician to Queen Elizabeth I of England published "De Magnete" (On the Magnet) the first ever work of experimental physics. In it he distinguished for the first time static electric forces from magnetic forces. He discovered that the Earth is a giant magnet just like one of the stones of Peregrinus, explaining how compasses work. He is credited with coining the word "electric" which comes from the Greek word "elektron" meaning amber.


Many wondrous powers have been ascribed to magnets and to this day magnetic bracelets are believed by some to have therapeutic benefits. In Gilbert's time it was believed that an adulteress could be identified by placing a magnet under her pillow. This would cause her to scream or be thrown out of bed as she slept.

Gilbert proved amongst other things that the smell of garlic did not affect a ship's compass. It is not known whether he experimented with adulteresses in his bed.


Gilbert was the English champion of the experimental method of scientific discovery, considered inferior to rational thought by the Greek philosopher Aristotle and his followers. He held the Copernican or heliocentric view, dangerous at the time, that the Sun, not the Earth, was the centre of the universe. He was a contemporary of the Italian astronomer Galileo Galilei (1564-1642) who made a principled stand in defence of the founding of physics on scientific method and precise measurements rather than on metaphysical principles and formal logic. These views brought Galileo into serious confrontation with the church and he was tried and punished for his heresies.

Experimental method rather than rational thought was the principle behind the Scientific Revolution, which separated Science (theories which can be proved) from Philosophy (theories which cannot be proved).


See also Bertrand Russell's definition of philosophy.


Gilbert died of bubonic plague in 1603, leaving his books, globes, instruments and minerals to the College of Physicians, but they were destroyed in 1666 in the great fire of London, which mercifully also brought the plague to an end.


1601 An early method of hardening wrought iron to make hard edged tool steel and swords, known as the cementation process, was first patented by Johann Nussbaum of Magdeburg in Germany, though the process was already known in Prague in 1574. It was patented again in England by William Ellyot and Mathias Meysey in 1614.

The method employed a solid state diffusion process in which carbon diffused into the wrought iron, increasing its carbon content to between 0.5% and 1.5%. Wrought iron rods or bars were covered with powdered charcoal (called cement) and sealed in a long airtight stone or clay lined brick box, like a sarcophagus, and heated to 1,000°C in a furnace for between one and two weeks. The nature of the diffusion process resulted in a non-uniform carbon content which was high near the surface of the bar, diminishing towards its centre, and the bars could still contain slag inclusions from the original precursor bloom from which the wrought iron was made. The process also caused blistering of the steel, hence the product made this way was called blister steel.
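
The week-long furnace times follow from the physics of diffusion: the depth to which the carbon penetrates grows only with the square root of time. A rough, order-of-magnitude sketch (the diffusivity figure below is an assumed ballpark value for carbon in hot iron, not a measured property of the historical process):

    from math import sqrt

    # Characteristic carburising depth x ~ sqrt(D * t).
    # D is an assumed order-of-magnitude diffusivity for carbon in iron
    # at around 1,000 degrees C; real values vary with temperature and phase.
    D = 2e-11  # m^2/s (assumed)

    for days in (1, 7, 14):
        t = days * 24 * 3600           # furnace time in seconds
        depth_mm = sqrt(D * t) * 1000  # characteristic depth in mm
        print(f"{days:2d} days -> ~{depth_mm:.1f} mm")

On these assumptions a week in the furnace carburises only the outer few millimetres, consistent with the carbon content diminishing towards the centre of the bar.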


See more about Iron and Steel Making


1603 Italian shoemaker and part-time alchemist from Bologna, Vincenzo Cascariolo, searching for the "Philosopher's Stone" for turning common metals into Gold, discovered phosphorescence instead. He heated a mixture of powdered coal and heavy spar (Barium sulphate) and spread it over an iron bar. It did not turn into Gold when it cooled, as expected, but he was astonished to see it glow in the dark. Though the glow faded, it could be "reanimated" by exposing it to the sun and so became known as "lapis solaris" or "sun stone", a primitive method of solar energy storage in chemical form.


1605 A five digit encryption code consisting only of the letters "a" and "b", giving 32 combinations to represent the letters of the alphabet, was devised by English philosopher and lawyer Francis Bacon. He called it a biliteral code. It is directly equivalent to the five bit binary Baudot code of ones and zeros used for over 100 years for transmitting data in twentieth century telegraphic communications.
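
A minimal sketch of the scheme in modern terms (Bacon's original used a 24 letter alphabet with I/J and U/V combined; this illustration assumes the modern 26 letters): each letter's position is written as five binary places, with 0 rendered as "a" and 1 as "b".

    # Bacon's biliteral code: each letter becomes a 5-place pattern of
    # "a"s and "b"s (2^5 = 32 patterns, more than enough for the alphabet).
    def encode(text):
        out = []
        for ch in text.upper():
            if ch.isalpha():
                n = ord(ch) - ord("A")   # the letter's position, 0 to 25
                bits = format(n, "05b")  # five binary digits
                out.append(bits.translate(str.maketrans("01", "ab")))
        return " ".join(out)

    print(encode("Bacon"))  # aaaab aaaaa aaaba abbba abbab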

More importantly Bacon, together with Gilbert, was an early champion of scientific method although it is not known whether they ever met.

Bacon criticized the notion that scientific advances should be made through rational deduction. He advocated the discovery of new knowledge through scientific experimentation. Phenomena would be observed and hypotheses made based on the observations. Tests would then be conducted to verify the hypotheses. If the tests produced reproducible results then conclusions could be made.


In his 1605 publication "The Advancement of Learning", Bacon coined the dictum "If a man will begin with certainties, he will end up with doubts; but if he will be content to begin with doubts, he shall end up in certainties".


See also the Scientific Revolution.


Bacon died as a result of one of his experiments. He investigated preserving meat by stuffing a chicken with snow. The experiment was a success but Bacon died of bronchitis contracted either from the cold chicken or from the damp bed, reserved for VIPs and unused for a year, where he was sent to recover from his chill.


There are many "Baconians" who claim today that at least some of Shakespeare's plays were actually written by Bacon. One of the many arguments put forward is that only Bacon possessed the necessary wide range of knowledge and erudition displayed in Shakespeare's plays.


1608 German born spectacle lens maker Hans Lippershey working in Holland, applied for a patent for the telescope for which he envisioned military applications. The patent was not granted on the basis that "too many people already have knowledge of this invention". Nevertheless, Lippershey's patent application was the first documented evidence of such a device. Legend has it that the telescope was discovered by accident when Lippershey, or two children playing with lenses in his shop, noticed that the image of a distant church tower became much clearer when viewed through two lenses, one in front of the other. The discovery revolutionised astronomy. Up to that date the pioneering work of Copernicus, Brahe and Kepler had all been based on many thousands of painstaking observations made with the naked eye without the advantage of a telescope.


See also the Scientific Revolution


1609 On the death of Danish Imperial Mathematician Tycho Brahe in 1601, German Mathematician Johannes Kepler inherited his position along with the astronomical data that Brahe had gathered over many years of painstaking observations. From this mass of data on planetary movements, collected without the help of a telescope, Kepler derived three Laws of Planetary Motion, the first two published as "Astronomia Nova" in 1609 and the third as "Harmonices Mundi" in 1619. These laws are:

  • The Law of Orbits: All planets move in elliptical orbits, with the Sun at one focus.
  • The Law of Areas: A line that connects a planet to the Sun sweeps out equal areas in equal times. See Diagram
  • The Law of Periods: The square of the period of any planet is proportional to the cube of the semi-major axis of its orbit.

Kepler's laws were the first to enable accurate predictions of future planetary orbits and at the same time they effectively disproved the Aristotelian and Ptolemaic model of geocentric planetary motion. Further evidence was provided during the same period by Galileo (See following entry).
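
The Law of Periods is easy to check numerically. With the semi-major axis a expressed in astronomical units and the period T in Earth years the constant of proportionality is 1, so T = a^1.5 (the orbital values below are modern figures, included for illustration):

    # Kepler's third law: T^2 = a^3 with a in AU and T in years.
    planets = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000,
               "Mars": 1.524, "Jupiter": 5.203, "Saturn": 9.537}

    for name, a in planets.items():
        T = a ** 1.5  # orbital period in Earth years
        print(f"{name:8s} a = {a:5.3f} AU -> T = {T:6.2f} years")

Mars comes out at about 1.88 years and Saturn at about 29.5 years, matching the observed periods.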


Kepler derived these laws empirically from the years of data gathered by Brahe, a monumental task, but he was unable to explain the underlying principles involved. The answer was eventually provided by Newton.


Recently Kepler's brilliance has been tarnished by forensic studies which suggest that he murdered Brahe in order to get his hands on his observations. (See Brahe)


See also the Scientific Revolution


1610 Italian physicist and astronomer Galileo Galilei was the first to observe the heavens through a refracting telescope. Using a telescope he had built himself, based on what he had heard about Lippershey's recent invention, he observed four moons, which had not previously been visible with the naked eye, orbiting the planet Jupiter. This was revolutionary news since it was definitive proof that the Earth was not the centre of all celestial movements in the universe, overturning the geocentric or Ptolemaic model of the universe which for more than a thousand years had been the bedrock of religious and Aristotelian scientific thought. At the same time his observations of mountains on the Earth's moon contradicted Aristotelian theory, which held that heavenly bodies were perfectly smooth spheres.

Publication of these observations in his treatise Sidereus Nuncius (Starry Messenger) gave fresh impetus to the Scientific Revolution in astronomy started by the publication of Copernicus' heliocentric theory almost 100 years before, but brought Galileo into a confrontation with the church. Charged with heresy, Galileo was made to kneel before the inquisitor and confess that the heliocentric theory was false. He was found guilty and sentenced to house arrest for the rest of his life.


In 1612, having determined that Jupiter's four brightest natural satellites, Io, Europa, Ganymede and Callisto, (also known as the Galilean Moons), made regular orbits around the planet, Galileo noted that the time at which they passed a reference position in their orbits, such as the point at which they begin to eclipse the planet, would be both regular and the same for any observer in the World. This could therefore be used as the basis for a universal timer or clock which in turn could be used to determine longitude.


Galileo carried out many investigations and experiments to determine the laws governing mechanical movement. He is famously reputed to have demonstrated that all bodies fall to Earth at the same rate, regardless of their mass, by dropping different sized balls from the top of the Leaning Tower of Pisa, thus disproving Aristotle's theory that the speed of falling bodies is directly proportional to their weight, but there is no evidence that Galileo actually performed this experiment. Such an experiment had however been performed by Simon Stevin in 1586.

In 1971, Apollo 15 astronaut David Scott repeated Galileo's experiment on the airless Moon with a feather and a hammer demonstrating that, unhampered by any atmosphere, they both fell to the ground at the same rate.


Galileo actually attempted to measure the rate at which a body falls to Earth under the influence of gravity, but he did not have an accurate method of measuring the time since the speed of the falling body was too fast and the duration too short. He therefore determined to "dilute" the effect of gravity by rolling a ball down an inclined plane to slow it down and increase the transit time. He expected to find that the distance travelled would increase by a fixed amount for each fixed increment in time. Instead he discovered that the distance travelled is proportional to the square of the time. See more about Galileo's "Laws of Motion".
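
A short numerical sketch of that result (the acceleration value is an arbitrary illustrative figure): because the distance grows with the square of the time, successive equal time intervals cover distances in the ratio 1 : 3 : 5 : 7, the "odd number rule" Galileo observed on his inclined plane.

    # Distance under constant acceleration: d = 0.5 * a * t^2.
    a = 2.0  # assumed constant acceleration along the plane, m/s^2

    prev = 0.0
    for t in range(1, 6):
        d = 0.5 * a * t ** 2  # total distance after t seconds
        print(f"t = {t} s  total = {d:5.1f} m  this interval = {d - prev:4.1f} m")
        prev = d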


In 1602 his inquisitive mind led him to make a remarkable discovery about the motion of pendulums. While sitting in a cathedral he observed the swinging of a chandelier and, using his pulse to determine the period of its swing, he was greatly surprised to find that as the movement of the pendulum slowed down, its period remained the same. His curiosity piqued, he followed up with a series of experiments and determined that the only factor affecting the period of the pendulum's swing was its length. It was independent of the arc of the swing, the weight of the pendulum bob and the speed of the swing. By using pendulums of different lengths, Galileo was able to produce timing devices which were much more accurate than his pulse.

It can't have been easy, counting and keeping a running total of pendulum swings and heart rate pulses at the same time.

About 40 years later, Christiaan Huygens developed a mathematical equation defining the period of the pendulum and went on to use the pendulum in the construction of the first accurate clocks.


See more about Oscillators and Timekeeping


1614 Scottish nobleman John Napier Baron of Merchiston, published Mirifici Logarithmorum Canonis Descriptio - Description of the Marvellous Canon (Rule) of Logarithms in which he described a new method for carrying out tedious multiplication and division by simpler addition and subtraction, together with a set of tables he had calculated for the purpose. The logarithmic tables contained 241 entries which had taken him 20 years to compute.

Napier's logarithms were not the logarithms we would recognise today. Neither were they Natural logarithms with a base of "e", as is often misquoted; the constant "e" and its notation were introduced by Euler over a century later.

Napier was aware that numbers in a geometric series could be multiplied by adding their exponents (powers), for example q^2 multiplied by q^3 = q^5, and that division could be performed by subtracting the exponents. Simple though the idea of logarithms may be, it had not been considered before because, with a simple base of 2 and exponent n, where n is a whole number, the numbers represented by 2^n become very large very quickly as n increases. This meant there was no obvious way of representing the intervening numbers. The idea of fractional exponents would have (and eventually did) solve this problem, but at the end of the sixteenth century people were just getting to grips with the notion of zero and were not comfortable with the idea of fractional powers.

To design a way of representing more numbers, while still retaining whole number exponents, Napier came up with the idea of making the base number smaller. But if the base number was very small there would be too many numbers. Using the number 1 (unity) as a base would not work either, since all the powers of 1 are equal to 1. He therefore chose (1 - 10^-7), or 0.9999999, as the base from which he constructed his tables. Napier named his exponents logarithms, from the Greek logos and arithmos, roughly translated as "ratio number".


Napier's publication was an instant hit with astronomers and mathematicians. Among these was Henry Briggs, mathematics professor at Gresham College, London who travelled 350 miles to Edinburgh the following year to meet the inventor of this new mathematical tool.

He stayed a month with Napier and in discussions they agreed two major improvements. Briggs suggested that the tables should be constructed with a base of 10 rather than (1 - 10^-7), which meant adopting fractional exponents, and Napier agreed that the logarithm of 1 should be 0 (zero), rather than the logarithm of 10^7 being 0 as it was in his original tables. Briggs' reward was the job of calculating the new logarithmic tables, which he eventually completed and published as Arithmetica Logarithmica in 1624. His tables contained the logarithms of 30,000 natural numbers to 14 places.
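
The working principle of Briggs' base 10 tables, multiplication reduced to addition, can be shown in a couple of lines (using the modern log10 function in place of a printed table):

    from math import log10

    # log(x * y) = log(x) + log(y): add the two table entries, then
    # take the antilog to recover the product.
    x, y = 4567.0, 89.2
    product = 10 ** (log10(x) + log10(y))
    print(product, x * y)  # both ~407376.4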


Meanwhile in 1617 Napier published a description of a new invention in his Rabdologiae, a "collection of rods". It was a practical method of multiplication using "numbering rods" with numbers marked off on them, known as "Napier's Bones". Surprisingly, they did not use his method of logarithms. (See also the following item - Gunter)

Already old and frail, Napier died the same year without seeing the final results of his work.

Briggs' logarithms are still in use today, now known as common logarithms.


Napier himself considered his greatest work to be a denunciation of the Roman Catholic Church which he published in 1593 as A Plaine Discovery of the Whole Revelation of St John.


1620 Edmund Gunter professor of astronomy at Gresham College, where Briggs was professor of mathematics, made a straight logarithmic scale engraved on a wooden rod and used it to perform multiplication and division using a set of dividers or calipers to add or subtract the logarithms. The predecessor to the slide rule. (See the following item)


1621 English mathematician and clergyman, William Oughtred, friend of Briggs and Gunter from Gresham College, put two of Gunter's scales (See previous item) side by side enabling logarithms to be added directly and invented the slide rule, the essential tool of every engineer for the next 350 years until electronic calculators were invented in the 1970s.

Oughtred also produced a circular version of the slide rule.


1628 English physician William Harvey published "De Motu Cordis" ("On the Motion of the Heart and Blood") in which he was the first to describe the circulation of blood and how it is pumped around the body by the heart, dispelling any remaining Aristotelian beliefs that the heart was the seat of intelligence and the brain was a cooling mechanism for the blood.


See also the Scientific Revolution


1629 Italian Jesuit priest Nicolo Cabeo published Philosophia Magnetica in which electric repulsion is identified for the first time.


1636 The first reasonably accurate measurement of the speed of sound was made by French polymath Marin Mersenne who determined it to be 450 m/s (1476 ft/s). This compares with the currently accepted velocity of 343 m/s (1,125 ft/s; 1,235 km/h; 767 mph), or a kilometre in 2.91 seconds or a mile in 4.69 seconds in dry air at 20 °C (68 °F).

(For reference, note also that the speed of light is 300,000,000 m/s compared with the speed of sound of around 343 m/s.)


Seventeenth century methods of measuring the speed of sound were usually based on observations of artillery fire and were notoriously inaccurate. Since the transit time of light over a given distance is negligible compared with the transit time of sound, by measuring the delay between seeing the powder flash from a distant cannon and hearing the explosion, the time for the sound to cover a given distance, and hence the speed, could be estimated. For practical measurements the distance of the artillery from the observer had to be a kilometre or more to obtain a reasonably long delay of a few seconds which could be measured by available means. Even so, the only available methods for measuring such short times were by means of a pendulum or by counting the observer's own pulse beats, which were hopelessly imprecise, error prone and dependent on operator reaction times.

Furthermore, because the effects of temperature, pressure, density, wind and moisture content of the air on the speed of propagation were unknown, they were not taken into account in the measurements.


Variations on the above procedure are still used today as traditional folk methods of estimating the distance to a lightning strike by counting the seconds between the flash and its following thunderclap.


Alternative set-ups, used at the time, for calculating the speed of sound involved creating a sharp noise in front of a wall or cliff and measuring the time delay before hearing its echo. The round trip distance to the wall and back divided by the time gives the speed of sound. Echo delays in practical, controlled sites are usually very short. A distance of 100 metres to the reflecting surface (200 metres round trip) results in an echo delay of only around half a second. This leads to great difficulties in measuring the time delay with the crude equipment available.
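
As a sketch of the arithmetic (the distance and delay are illustrative values): the speed is simply the round trip distance divided by the measured delay.

    # Echo method: speed of sound = round-trip distance / delay.
    distance_to_wall = 100.0  # metres (illustrative)
    delay = 0.583             # seconds between the noise and its echo

    speed = 2 * distance_to_wall / delay
    print(f"{speed:.0f} m/s")  # ~343 m/s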


Milestones in the Understanding of Acoustics and Sound Propagation


  • (Circa 350 B.C.) Aristotle was one of the first to speculate on the transmission of sound, writing in his treatise "On the Soul" that "sound is a particular movement of air".

  • 1508 Leonardo Da Vinci, using a water analogy, showed in drawings that sound travels in waves like the waves on a pond.

  • 1635 Pierre Gassendi, French priest, philosopher, scientific chronicler and experimentalist and a friend of Mersenne, is reported to have measured the speed of sound as a somewhat high 478 m/s (1568 ft/s), though this experiment was not documented in his workbooks. Using the artillery method he compared the low rumbling sound from a cannon with the higher pitched sound of a musket from the same distance and concluded that the speed of sound is independent of the pitch (frequency).
  • Gassendi was an atomist and did not believe the wave theory of sound. He believed that sound and light are carried by particles which are not affected by the surrounding medium of air or wind through which they travel. In other words, sound was a stream of atoms emitted from the sounding body: the speed of sound is the velocity of the moving atoms, and its frequency is the number of atoms emitted per second.


  • 1636 Marin Mersenne, in contrast to his friend Gassendi, held the more rational view that sound travelled in waves like the ripples on water. Using a pendulum to measure the time between the flash of exploding gunpowder and the arrival of the sound, he determined the speed of sound to be 450 m/s (1476 ft/s). As measurement techniques improved it was revised to a more accurate 316 m/s (1036 ft/s).
  • He also established that the intensity of sound, like that of light, is inversely proportional to the distance from its source and showed the speed to be independent of pitch as well as intensity (loudness).


    The same year Mersenne also published his "Harmonie Universelle" describing the acoustic behaviour of stretched strings as used in musical instruments, which provided the basis for modern musical acoustics. The relationship between the frequency and the tension, weight and length of the strings was expressed in three laws known as Mersenne's Laws as follows:

    The fundamental frequency f0 of a vibrating string (that is without harmonics) is:

    1. Inversely proportional to the length L of the string (also known as Pythagoras Law).   f0 ∝ 1/L
    2. Inversely proportional to the square root of the mass per unit length μ.               f0 ∝ 1/√μ
    3. Proportional to the square root of the stretching force F.                             f0 ∝ √F

    The three laws can be combined in a single expression thus:

    f0 = (1/2L)√(F/μ)
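
    A quick numerical sketch of the combined law (the string length, tension and mass per unit length below are assumed values, roughly those of a steel guitar string):

        from math import sqrt

        # Mersenne's law: f0 = (1 / 2L) * sqrt(F / mu)
        L = 0.65   # vibrating length, metres (assumed)
        F = 71.0   # tension, newtons (assumed)
        mu = 4e-4  # mass per unit length, kg/m (assumed)

        f0 = (1 / (2 * L)) * sqrt(F / mu)
        print(f"f0 = {f0:.0f} Hz")  # ~324 Hz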


    Known as the "Father of Acoustics", Mersenne regularly corresponded with the leading mathematicians, astronomers and philosophers of the day, and in 1635 set up the informal, private Académie Parisienne where 140 correspondents shared their research. This was the direct precursor of the French Académie des Sciences established by Colbert in 1666.


  • 1660 Giovanni Alfonso Borelli and Vincenzo Viviani working at the Accademia del Cimento in Florence improved the sound timing techniques resulting in more consistent results and a value of 350 m/s (1148 ft/s) was generally accepted as the speed of sound.

  • 1660 Robert Boyle, using an improved vacuum pump, showed that the sound intensity from a bell housed in a glass chamber diminished to zero as the air was pumped out. From this he concluded that sound cannot be transmitted through a vacuum and that sound is a pressure wave which requires a medium such as air to transmit it. See also the luminiferous aether and the transmission of light.

  • 1687 Isaac Newton in his Principia Mathematica showed that the speed of sound depended on the density and compressibility of the medium through which it travelled and could be calculated from the following relationship using air as an example.
  • V = √(P/ρ)

    Where: V is the sound velocity, P is the atmospheric pressure and ρ is the density of the air; the ratio P/ρ is a measure of its compressibility.

    Newton used echoes from a wall at the end of an outdoor corridor at Trinity College, Cambridge to estimate the speed of sound and to verify his calculations, but the calculated value of 295 m/s (968 ft/s) was consistently around 16% less than his measured experimental values and those achieved by others at the time.

    The unexplained difference is attributed to the assumptions made, and not made. These include the following:

    • Newton used a mechanical interpretation of sound as being "pressure" pulses transmitted through adjacent fluid particles.
    • When a pulse is propagated through a fluid, particles of the fluid move in simple harmonic motion at a constant frequency, and if this is true for one particle it must be true for all adjacent particles.
    • Possible errors due to temperature, pressure, moisture content and wind, elasticity of the air and whether they were constant, proportional or non-linear were mostly unknown at the time and were consequently ignored.

  • 1740 Giovanni Lodovico Bianconi, an Italian doctor demonstrated that the speed of sound in air increases with temperature. This is because molecules at higher temperatures have more energy and vibrate more quickly and since they vibrate faster, they can transmit sound waves more quickly.

  • 1746 Jean-Baptiste le Rond d'Alembert, a French philosopher, mathematician and music theorist deduced the Wave Equation relating the velocity of a sound wave v to its frequency f and wavelength λ, based on studies of vibrating strings, as follows:
  • v = f λ

    The relationship also applies to electromagnetic waves.
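
    A one line illustration (the values assume dry air at about 20°C and concert pitch A):

        # v = f * lambda: wavelength of a 440 Hz tone in air at 343 m/s.
        v, f = 343.0, 440.0
        print(f"lambda = {v / f:.2f} m")  # ~0.78 m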


  • 1802 Pierre-Simon Laplace and his young protégé Jean-Baptiste Biot rectified Newton's troublesome error, following up by publishing a formal correction in 1816. They explained that when a sound wave compresses and rarefies the air in quick succession, Boyle's Law does not apply because the temperature does not remain constant. Heat is liberated during the compression part of the cycle but, because of the relatively high frequency of the sound wave, the heat does not have time to dissipate or be reabsorbed during the low pressure half of the cycle. This causes the local temperature to increase, in turn increasing the local pressure and raising the speed of the sound correspondingly. Thus Newton's calculations were brought into line with the experimental results.
  • In modern terms, the rapidly fluctuating compression and expansion of the air through which the sound wave passes is an adiabatic process, not an isothermal one.
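
    A numerical sketch of the two formulas side by side (pressure, density and the ratio of specific heats are modern values for dry air at about 20°C):

        from math import sqrt

        # Newton's isothermal estimate versus the Laplace adiabatic correction.
        P = 101325.0  # atmospheric pressure, Pa
        rho = 1.204   # density of air, kg/m^3
        gamma = 1.4   # ratio of specific heats for air

        v_newton = sqrt(P / rho)           # ~290 m/s, around 16% low
        v_laplace = sqrt(gamma * P / rho)  # ~343 m/s, matching experiment
        print(f"Newton : {v_newton:.0f} m/s")
        print(f"Laplace: {v_laplace:.0f} m/s")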


1642 At the age of eighteen, French mathematician and physicist, Blaise Pascal constructed a mechanical calculator capable of addition and subtraction. Known as the Pascaline, it was the forerunner of computing machines. Despite its utility, this great innovation failed to capture the imagination (or the attention) of the scientific and commercial public and only fifty were made. Thirty years later it was eclipsed by Leibniz' four function calculator which could perform multiplication and division as well as addition and subtraction.


Pascal also did pioneering work on hydraulics, resulting in the statement of Pascal's principle, that "pressure will be transmitted equally throughout a confined fluid at rest, regardless of where the pressure is applied". He explained how this principle could be used to exert very high forces in a hydraulic press. Such a system would have two cylinders with pistons of different cross-sectional areas connected to a common reservoir, or simply connected by a pipe. When a force is exerted on the smaller piston, it creates a pressure in the fluid equal to the force divided by the area of the piston. This same pressure also acts on the larger piston, but because its area is greater, the pressure translates into a proportionally larger force. The ratio of the two forces is equal to the ratio of the areas of the two pistons, and this ratio is the hydraulic mechanical advantage. Thus the cylinders act in a similar way to a lever, as described by Archimedes, which effectively magnifies the force exerted. 150 years later Bramah was granted a patent for inventing the hydraulic press.
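
A minimal sketch of the force multiplication (the piston sizes and applied force are illustrative values):

    from math import pi

    # Pascal's principle in a hydraulic press: the pressure is the same
    # at both pistons, so F2 = F1 * (A2 / A1).
    def piston_area(diameter_m):
        return pi * (diameter_m / 2) ** 2

    F1 = 100.0              # newtons applied to the small piston
    A1 = piston_area(0.02)  # 2 cm diameter piston
    A2 = piston_area(0.20)  # 20 cm diameter piston

    F2 = F1 * (A2 / A1)     # force delivered by the large piston
    print(f"{F2:.0f} N")    # 10000 N, 100 times the applied force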

The unit of pressure was recently named the "Pascal" in his honour, replacing the older, more descriptive, pounds per square inch (psi) or Newtons per square metre (N/m²).


Besides hydraulics, Pascal explained the concept of a vacuum. At the time, the conventional Aristotelian view was that space must be filled with some invisible matter, and a vacuum was considered an impossibility.


In 1653 Pascal described a convenient shortcut for determining the coefficients of a binomial series, now called Pascal's Triangle, and the following year, in response to a request from a gambling friend, he used it to derive a method of calculating the odds of particular outcomes of games of chance. In this case, two players wishing to finish a game early wanted to divide their remaining stakes fairly, depending on their chances of winning from that point. To arrive at a solution, he corresponded with fellow mathematician Fermat and together they worked out the notion of expected values and laid the foundations of the mathematical theory of probabilities.
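
A minimal sketch of the triangle's construction, in which each entry is the sum of the two entries above it, giving the binomial coefficients:

    # Pascal's triangle: row n holds the coefficients of (x + y)^n.
    def pascal_rows(n):
        row = [1]
        for _ in range(n):
            yield row
            row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

    for r in pascal_rows(5):
        print(r)
    # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]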

See Pascal's Triangle and Pascal Probability

Pascal did not claim to have invented his eponymous triangle. It was known to Persian mathematicians in the eleventh and twelfth centuries and to Chinese mathematicians in the eleventh and thirteenth centuries as well as others in Europe and was often named after local mathematicians.


For most of his life Pascal suffered from poor health and he died at the age of 39 after abandoning science and devoting most of the last ten years of his short life to religious studies culminating in the publication (posthumously) of Pensées (Thoughts), a justification of the Christian faith.


See also the Scientific Revolution


1643 Evangelista Torricelli served as Galileo's secretary and succeeded him as court mathematician to Grand Duke Ferdinand II and in 1643 made the world's first barometer for measuring atmospheric or air pressure by balancing the pressure force, due to the weight of the atmosphere, against the weight of a column of mercury. This was a major step in the understanding of the properties of air.


1644 French philosopher and mathematician René Descartes published Principia Philosophiae, in which he attempted to put the whole universe on a mathematical foundation, reducing its study to one of mechanics. Considered to be the first of the modern school of mathematicians, he believed that Aristotle's logic was an unsatisfactory means of acquiring knowledge and that only mathematics provided the truth, so that all reason must be based on mathematics.

He was still not convinced of the value of experimental method considering his own mathematical logic to be superior.

His most important work La Géométrie, published in 1637, includes his application of algebra to geometry from which we now have Cartesian geometry. He was also the first to describe the concept of momentum from which the law of conservation of momentum was derived.


See also the Scientific Revolution


Descartes accepted sponsorship by Queen Christina of Sweden, who persuaded him to go to Stockholm. Her daily routine started at 5.00 a.m. whereas Descartes was used to rising at 11 o'clock. After only a few months in the cold northern climate, walking to the palace at 5 o'clock every morning, he died of pneumonia.


1646 The word Electricity was coined by English physician Thomas Browne, even though he contributed nothing else to the science.




1651 German chemist Johann Rudolf Glauber in his "Practise on Philosophical Furnaces" describes a safety valve for use on chemical retorts. It consisted of a conical valve with a lead cap which would lift in response to excessive pressure in the retort allowing vapour to escape and the pressure to fall. The weight of the cap would reseat the valve once the pressure returned to an acceptable level. Today, modern implementations of Glauber's valve are the basis of the pressure vents incorporated into sealed batteries to prevent rupture of the cells due to pressure build up.
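
The balance of forces in such a valve is simple to sketch (the cap mass and seat diameter below are invented, illustrative values): the valve lifts when the pressure force on the seat exceeds the weight of the cap.

    from math import pi

    # Lift pressure of a simple weighted valve: the valve opens when
    # pressure * seat area exceeds the weight of the cap.
    cap_mass = 0.5        # kg of lead cap (assumed)
    seat_diameter = 0.01  # metres (assumed)
    g = 9.81              # m/s^2

    area = pi * (seat_diameter / 2) ** 2
    lift_pressure = cap_mass * g / area  # Pa above atmospheric
    print(f"{lift_pressure / 1000:.0f} kPa")  # ~62 kPa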

In 1658 Glauber published Opera Omnia Chymica "Complete Works of Chemistry", a description of different techniques for use in chemistry which was widely reprinted.


1654 The first sealed liquid-in-glass thermometer was produced by the artisan Mariani at the Accademia del Cimento in Florence for the Grand Duke of Tuscany, Ferdinand II. It used alcohol as the expanding liquid. Although his thermometers agreed with each other, they were inaccurate in absolute terms, and there was no standardised scale in use.


1656 Building on Galileo's discoveries, Dutch physicist and astronomer Christiaan Huygens determined that the period P of a pendulum is given by:

P = 2 π √(l/g)

Where l is the length of the pendulum and g is the acceleration due to gravity.

Huygens made the first practical pendulum clock, making accurate time measurement possible for the first time. Previous mechanical clocks had pointers which indicated the progress of slowly rising water or slowly falling weights and were only accurate to large fractions of an hour. Huygens' clock enabled time to be measured in seconds. It depended on gearing a mechanical indicator to the constant periodic motion of a pendulum. Falling weights drove the pointer mechanism and transferred just enough energy to the pendulum to overcome friction and air resistance so that it did not stop.

Huygens' pendulum reduced the loss of time by clocks from about 15 minutes per day to about 15 seconds per day.
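
A quick check of the period formula (g is the standard value; the lengths are illustrative): a pendulum about 0.994 m long has the two second period of the classic "seconds pendulum".

    from math import pi, sqrt

    # Huygens' pendulum period: P = 2 * pi * sqrt(l / g).
    g = 9.81  # m/s^2

    def period(l):
        return 2 * pi * sqrt(l / g)

    print(f"{period(0.994):.3f} s")  # ~2.000 s
    print(f"{period(0.25):.3f} s")   # ~1.003 s for a 25 cm pendulum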


In 1675 Huygens published in the French Journal de Sçavans, his design for the balance spring escapement which replaced the clock's pendulum regulator, enabling the design of watches and portable timekeepers.

The pendulum clock however remained the world's most accurate time-keeper for nearly 300 years until the invention of the quartz clock in 1927.


See more about Huygens' Clocks


Huygens also made many astronomical observations noting the characteristics of Saturn's rings and the surface of Mars. He was also the first to make a reasoned estimate of the distance of the stars. He assumed that Sirius had the same brightness as the Sun and from a comparison of the light intensity received here on Earth he calculated the distance to Sirius to be 2.5 trillion miles. It is actually about 20 times further away than this. There was however nothing wrong with Huygens' calculations. It was the assumption which was incorrect. Sirius is actually much brighter than the Sun, but he had no way of knowing that. Had he known the true brightness of Sirius, his estimate would have been much closer to the currently accepted value.


1658 Irish Archbishop James Ussher, following a literal interpretation of the bible, calculated that the Earth was created on the evening of 22 October 4004 B.C..


1660 English mathematician and astronomer Richard Towneley, together with his friend, physician Henry Power, investigated the expansion of air at different altitudes by enclosing a fixed mass of air in a Torricelli/Huygens U-tube with its open end immersed in a dish of mercury. They noted the expansion of the enclosed air at different altitudes on a hill near their home and concluded that the gas pressure, the external atmospheric pressure of the air on the mercury, was inversely proportional to the volume. They communicated their findings to Robert Boyle, a distinguished contemporary chemist, who verified the results and published them two years later as Boyle's Law. Boyle referred to Towneley's conclusions as "Towneley's Hypothesis".


See also Towneley's improvements to the pendulum clock timekeeping mechanism. Another of his ideas for which others appear to have got the credit.


1660 The Royal Society founded in London as a "College for the Promoting of Physico-Mathematical Experimental Learning", which met weekly to discuss science and run experiments. Original members included chemist Robert Boyle and architect Christopher Wren.


See also the Scientific Revolution


1661 Huygens invented the U tube manometer, a modification of Torricelli's barometer, for determining gas pressure differences. In a typical "U tube" manometer the difference in pressure (really a difference in force) between the ends of the tube is balanced against the weight of a column of liquid. The gauges are only suitable for measuring low pressures; most gauges record the difference between the fluid pressure and the local atmospheric pressure when one end of the tube is open to the atmosphere.
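
A sketch of the reading from such a gauge (the column height is an illustrative value): the pressure difference supports a column of liquid, so ΔP = ρgh.

    # U-tube manometer: dP = rho * g * h.
    rho = 13600.0  # kg/m^3, density of mercury
    g = 9.81       # m/s^2
    h = 0.025      # 25 mm difference in column heights (illustrative)

    dP = rho * g * h
    print(f"{dP:.0f} Pa")  # ~3335 Pa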


1661 Irish chemist Robert Boyle published "The Sceptical Chymist" in which he introduced the concept of elements. At the time only 12 elements had been identified. These included nine metals, Gold, Silver, Copper, Tin, Lead, Zinc, Iron, Antimony and Mercury, and two non metals, Carbon and Sulphur, all of which had been known since antiquity, as well as Bismuth, which had been discovered in Germany around 1400 A.D. Platinum had been known to South American Indians from ancient times but only came to the attention of Europeans in the eighteenth century. Boyle himself discovered phosphorus, which he extracted from urine in 1680, taking the total of known elements to fourteen.

Though an alchemist himself, believing in the possibility of transmutation of metals, he was one of the first to break with the alchemist's tradition of secrecy and published the details of his experimental work including failed experiments.


See also the Scientific Revolution


1662 Boyle published Boyle's Law stating that the pressure and volume of a gas are inversely proportional.

PV=K

The first of the Gas Laws.

The relationship was originally discovered in 1660 by English mathematician Richard Towneley but attributed to Boyle. Neither Towneley nor Boyle was aware that the relationship was temperature dependent, and it was not until 1676 that the relationship was rediscovered by French physicist and priest, Abbé Edme Mariotte, and shown to apply only when the gas temperature is held constant. The law is known as Mariotte's Law in non-English speaking countries.
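
A two line illustration of the law (the pressure and volume figures are arbitrary): at constant temperature P·V stays fixed, so halving the volume doubles the pressure.

    # Boyle's law: P * V = k at constant temperature.
    P1, V1 = 100.0, 2.0  # kPa and litres (illustrative)
    k = P1 * V1

    for V in (2.0, 1.0, 0.5):
        print(f"V = {V:3.1f} L -> P = {k / V:5.0f} kPa")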


See also Boyle on Sound Transmission


1663 Otto von Guericke, the Burgomaster of Magdeburg in Germany, invented the first electric generator, which produced static electricity by rubbing a pad against a large rotating sulphur ball turned by a hand crank. It was essentially a mechanised version of Thales' demonstrations of electrostatics using amber in 600 B.C. and the first machine to produce an electric spark. Von Guericke had no idea what the sparks were, and their production by the machine was regarded at the time as magic or a clever trick. The device enabled experiments with electricity to be carried out, but since it was not until 1729 that the possibility of electric conduction was discovered by Gray, the charged sulphur ball had to be moved to the place where the electric experiment took place. Von Guericke's generator remained the standard way of producing electricity for over a century.


Von Guericke was famed more for his studies of the properties of a vacuum and for his design of the Magdeburg Hemispheres experiment. Aristotle's theory that a vacuum cannot exist had, like many of his theories, been accepted uncritically by philosophers as conventional wisdom for centuries, encapsulated in the saying "Nature abhors a vacuum", and von Guericke set about disproving it by experimental means. In 1650 he designed a piston based air pump with which he could evacuate the air from a chamber, and he used it to create a vacuum in experiments which showed that the sound of a bell in a vacuum cannot be heard, nor can a vacuum support a candle flame or animal life. To demonstrate the strength of a vacuum, in 1654 he constructed two hollow copper hemispheres which fitted together along a greased flange forming a hollow sphere. When the air was evacuated from the sphere, the external air pressure held the hemispheres together and two teams of horses could not pull them apart, yet when air was released into the sphere the hemispheres simply fell apart.
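
The force involved is easy to estimate (the sphere diameter is an assumed figure of about half a metre, roughly the size of the originals): the atmosphere presses on the evacuated sphere with a force equal to atmospheric pressure times the sphere's cross-sectional area.

    from math import pi

    # Force holding the evacuated hemispheres together:
    # F = P_atm * pi * r^2 (the sphere's cross-sectional area).
    P_atm = 101325.0  # Pa
    r = 0.25          # sphere radius, metres (assumed)

    F = P_atm * pi * r ** 2
    print(f"{F / 1000:.1f} kN")  # ~19.9 kN, about two tonnes of force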

(See Magdeburg Hemispheres picture).


See also the Scientific Revolution


1665 Boyle published a description of a hydrometer for measuring the density of liquids which was essentially the same as those still in use today for measuring the specific gravity (S.G.) of the electrolyte in Lead Acid batteries. Hydrometers consist of a sealed capsule of lead or mercury inside a glass tube into which the liquid being measured is placed. The height at which the capsule floats represents the density of the liquid.

The hydrometer is however considered to be the invention of Greek mathematician Hypatia.


1665 The Journal des Sçavans (later renamed Journal des Savants), the earliest academic journal to be published in Europe was established. Its content included obituaries of famous men, church history, and legal reports. It was followed two months later by the first appearance of the Philosophical Transactions of the Royal Society.


1665 English polymath, Robert Hooke published Micrographia in which he illustrated a series of very small insects and plant specimens he had observed through a microscope he had constructed himself for the purpose. It included a description of the eye of a fly and tiny sections of plant materials for which he coined the term "cells" because their distinctive walls reminded him of monks' or prison quarters. The publication also included the first description of an optical microscope and was, it is claimed, the inspiration for Antonie van Leeuwenhoek, who is often himself credited with the invention of the microscope. Hooke's publication was the first major publication of the recently founded Royal Society and was the first scientific best-seller, inspiring a wide public interest in the new science of microscopy.


See also the Scientific Revolution


1666 The French Académie des Sciences was founded in Paris by King Louis XIV at the instigation of Jean-Baptiste Colbert the French Minister of Finances, as a government organisation with the aim of encouraging and protecting French scientific research. Colbert's dirigiste economic policies were protectionist in nature and involved the government in regulating French trade and industry, echoes of which remain to this day.


1668 Dutch draper, haberdasher and scientist, Antonie Phillips van Leeuwenhoek, possibly inspired by Hooke's Micrographia (see above) made his first microscope. Known as the "Father of Microbiology" he subsequently produced over 450 high quality lenses and 247 microscopes which he used to investigate biological specimens. He was the first to observe and describe single-celled organisms and was also the first to observe and record muscle fibers, bacteria, spermatozoa, and blood flow in capillaries. Van Leeuwenhoek kept the British Royal Society informed of the results of his extensive investigations and eventually became a member himself.


1668 Scottish mathematician and astronomer James Gregory published Geometriae Pars Universalis (The Universal Part of Geometry) in which he proved the fundamental theorem of calculus, that the two operations of differentiation and integration are the inverses of each other. A system of infinitesimals, which we would now call integration, had been used by Archimedes circa 260 B.C. to calculate areas. Later, the concepts of rate and continuity had been studied by Oxford and other scholars since the fourteenth century. But before Gregory, nobody had connected geometry, and the calculation of areas, to motion, and the calculation of velocity.
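
A numerical sketch of the theorem (the function is chosen arbitrarily): summing, i.e. integrating, a function's derivative over an interval recovers the change in the function itself.

    # Fundamental theorem of calculus, checked numerically:
    # the integral of f'(x) from a to b equals f(b) - f(a).
    def f(x):
        return x ** 3

    def df(x):
        return 3 * x ** 2  # the derivative of x^3

    n, a, b = 100000, 0.0, 2.0
    h = (b - a) / n
    integral = sum(df(a + (i + 0.5) * h) for i in range(n)) * h
    print(integral, f(b) - f(a))  # both ~8.0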

A more general proof of the relationship between integrals and differentials was developed by English mathematician and theologian Isaac Barrow. It was published posthumously in 1683, by fellow mathematician John Collins, in the Lectiones Mathematicae which summarised Barrow's work, carried out between 1664 and 1677, on the relationships between the estimation of tangents and areas (called quadratures at the time) which mirrored the procedures used in differential and integral calculus.

In 1663 at the age of 23 Barrow was selected as the first Lucasian professor at Cambridge. In 1669 he resigned his position to study divinity for the rest of his life. The Lucasian Chair and the baton for developing the calculus were passed to his student Isaac Newton who was already developing his own ideas on its practical applications around the same time, twenty years before the publication of his Principia Mathematica.


Meanwhile Gregory was one of the first to investigate the properties of transcendental functions and their application to trigonometry and logarithms. A transcendental function "transcends" algebra in that it cannot be expressed in terms of a finite sequence of the algebraic operations of addition, multiplication, and root extraction. Transcendental numbers are not algebraic: they cannot be expressed as integers, as ratios of integers, or as roots of polynomial equations with rational coefficients, but they can be computed as the sum of an infinite series. Examples of transcendental functions include the exponential function, the logarithm, and the trigonometric functions. Transcendental numbers include π and the exponential constant e (Euler's number).

Gregory developed a method of calculating transcendental numbers by a process of successive differentiation to produce an infinite power series which converges towards the result but he was unable to prove conclusively that π and e were transcendental. The proof was confirmed many years later after his untimely death at the age of only 36.

English mathematician Brook Taylor applied Gregory's theory to various trigonometric and logarithmic functions to produce corresponding series which he published in his book Methodus incrementorum directa et inversa in 1715. These series became known as Taylor expansions. Scottish mathematician Colin Maclaurin subsequently developed a modified version, or special case, of the Taylor expansion, simplified by centring it on zero, which became known as the Maclaurin expansion.


Taylor and Maclaurin expansions are used extensively today in modern computer systems to provide mathematical approximations for trigonometric, logarithmic and other transcendental functions. See examples.
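
A minimal sketch of such an approximation (the number of terms is chosen arbitrarily): the Maclaurin series for sin(x) is x - x^3/3! + x^5/5! - ..., and a handful of terms already matches the library function.

    from math import factorial, pi, sin

    # Maclaurin (Taylor about zero) series approximation of sin(x).
    def sin_maclaurin(x, terms=6):
        return sum((-1) ** n * x ** (2 * n + 1) / factorial(2 * n + 1)
                   for n in range(terms))

    x = pi / 4
    print(sin_maclaurin(x), sin(x))  # both ~0.70710678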


1675 Boyle discovered that electric force could be transmitted through a vacuum and observed attraction and repulsion.


1676 Prolific English engineer, surveyor, architect, physicist, inventor, socialite and self publicist, Robert Hooke, considered by some to be England's Leonardo (there were others - see Cayley), is now mostly remembered for Hooke's Law for springs, which states that the extension of a spring is proportional to the force applied, or as he wrote it in Latin "Ut tensio, sic vis" ("as is the extension, so is the force"). From this, the energy stored in the spring can be calculated by integrating the force times the displacement over the extension of the spring. The force per unit extension is known as the spring constant. Hooke actually discovered his law in 1660 but, afraid that he would be scooped by his rival Newton, he published his preliminary ideas as an anagram "ceiiinosssttuv" in order to register his claim for priority. It was not until 1676 that he revealed the law itself. The forerunner of digital time stamping?
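
In modern notation the law is F = kx, and integrating gives the stored energy E = ½kx² (the spring constant and extension below are illustrative values):

    # Hooke's law F = k * x and the stored energy E = 0.5 * k * x^2.
    k = 200.0  # spring constant, N/m (assumed)
    x = 0.05   # extension, metres (assumed)

    F = k * x
    E = 0.5 * k * x ** 2
    print(f"F = {F:.1f} N, E = {E:.3f} J")  # 10.0 N, 0.250 J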


In 1657 Hooke was the first to propose using a spring rather than gravity to drive the oscillator in clock timekeeping regulators, eliminating the pendulum and enabling much smaller, portable clocks and watches. He envisaged the back and forth bending of a straight flat spring to provide the necessary restoring force, but it was Huygens who later made the first practical clocks based on this method.

The following year, Hooke invented the Anchor Escapement, the essential timekeeping mechanism used in long case (grandfather) pendulum clocks for over 200 years until it was gradually replaced by the more accurate deadbeat escapement.

See more about Hooke's clock mechanisms.


Hooke was surveyor of the City of London and assistant to Christopher Wren in rebuilding the city after the great fire of 1666. He made valuable contributions to optics, microscopy, astronomy, the design of clocks, the theories of springs and gases, the classification of fossils, meteorology, navigation, music, mechanical theory and inventions, but despite his many achievements he was overshadowed by his contemporary Newton, with whom he was, unfortunately, constantly in dispute. Hooke claimed a role in some of Newton's discoveries but he was never able to back up his theories with mathematical proofs. Apparently there was at least one subject which he had not mastered.


1673 Between the years 1673 and 1686, German mathematician, diplomat and philosopher, Gottfried Wilhelm Leibniz, developed his theories of mathematical calculus publishing the first account of differential calculus in 1684 followed by the explanation of integral calculus in 1686. Unknown to him these techniques were also being developed independently by Newton. Newton got there first but Leibniz published first and arguments about priority raged for many years afterwards. Leibniz's notation has been adopted in preference to Newton's but the concepts are the same.

He also introduced the words function, variable, constant, parameter and coordinates to explain his techniques.


Leibniz was a polymath and another candidate for the title "The last man to know everything". As a child he learned Latin at the age of 8, Greek at 14 and in the same year he entered the University of Leipzig where he earned a Bachelors degree in philosophy at the age of 16, a Bachelors degree in law at 17 and Masters degrees in both philosophy and law at the age of 20. At 21 he obtained a Doctorate in law at Altdorf. In 1672 when he was 26, his diplomatic travels took him to Paris where he met Christiaan Huygens who introduced him to the mathematics of the pendulum and inspired him to study mathematics more seriously.


In 1679 Leibniz proposed the concept of binary arithmetic in a letter written to French mathematician and Jesuit missionary to China, Joachim Bouvet, showing that any number may be expressed by 0's and 1's only. Now the basis of digital logic and signal processing used in computers and communications.
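Leibniz's observation is easily demonstrated: repeatedly dividing a number by two and collecting the remainders yields its binary representation. A minimal Python sketch (the function name is illustrative):

```python
# Express a number in binary by repeated division by 2,
# collecting the remainders (Leibniz's 0's and 1's).
def to_binary(n):
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits

print(to_binary(1679))  # '11010001111' - the year of Leibniz's letter
print(bin(1679))        # Python's built-in equivalent: '0b11010001111'
```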

Surprisingly Leibniz also suggested that God may be represented by unity, and "nothing" by zero, and that God created everything from nothing. He was convinced that the logic of Christianity would help to convert the Chinese to the Christian faith. He believed that he had found an historical precedent for this view in the 64 hexagrams of the Chinese I Ching or the Book of Changes attributed to China's first shaman-king Fuxi (Fu Hsi) dating from around 2800 B.C. and first written down as the now lost manual Zhou Yi in 900 B.C.. A hexagram consists of blocks of six solid or broken lines (or stalks of the Yarrow plant) forming a total of 64 possibilities. The solid lines represent the bright, positive, strong, masculine Yang with active power while the broken or divided lines represent the dark, negative, weak, feminine Yin with passive power. According to the I Ching, the two energies or polarities of the Yin and Yang are both opposing and complementary to each other and represent all things in the universe which is a progression of contradicting dualities.

Although the I Ching had more to do with fortune telling than with mathematics, there were other precedents to Leibniz's work. The first known description of a binary numeral system was made by the Indian mathematician Pingala, variously dated between the 5th and the 2nd centuries B.C..


In 1671 Leibniz invented a 4 function mechanical calculator which could perform addition, subtraction, multiplication and division on decimal numbers which he demonstrated to the Royal Society in London in 1673 but they were not impressed by his crude prototype machine. (Pascal's 1642 calculator could only perform addition and subtraction.) It was not until 1676 that Leibniz eventually perfected it. His machine used a stepped cylinder to bring into mesh different gear wheels corresponding to the position of units, tens, hundreds etc. to operate on the particular digit as required. Strangely, as the inventor of binary arithmetic, he did not use it in his calculator.
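As a loose software analogy only (Leibniz's machine was of course entirely mechanical, and this sketch is ours, not a model of his stepped cylinder), the following Python fragment mimics what the geared wheels for units, tens and hundreds did: operate on each digit in turn and pass any carry on to the next wheel:

```python
# Illustrative sketch: decimal addition digit by digit with carry,
# mimicking the carry passed between Leibniz's geared digit wheels.
# Digits are stored least-significant first.
def add_decimal(a_digits, b_digits):
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        total = a + b + carry
        result.append(total % 10)  # the wheel's new position
        carry = total // 10        # the carry passed to the next wheel
    if carry:
        result.append(carry)
    return result

# 476 + 589 = 1065, digits least-significant first
print(add_decimal([6, 7, 4], [9, 8, 5]))  # [5, 6, 0, 1]
```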


His most famous philosophical proposition was that God created "the best of all possible worlds".


1681 French physicist and inventor Denis Papin invented the pressure release valve or safety valve to prevent explosions in pressure vessels. Although Papin is credited with the invention, safety valves had in fact been described by Glauber thirty years earlier. Papin's valve however was adjustable for different pressures, by means of moving the lead weight along the lever which kept the valve shut. Papin's safety valve became a standard feature on steam engines, saving many lives from explosions.

The invention of the safety valve came as a result of his work with pressurised steam. In 1679 he had invented the pressure cooker which he called the steam digester.


Observing that the steam tended to lift the lid of his cooker, in 1690 Papin also conceived the idea of using the pressure of steam to do useful work. He introduced a small amount of water into a cylinder closed by a piston. On heating the water to produce steam, the pressure of the steam would force the piston up. Cooling the cylinder again caused the steam to condense, creating a vacuum under the piston which would pull it down (in fact the atmospheric pressure would push the piston down). This pumping action by a piston in a cylinder was the genesis of the reciprocating steam engine. Papin envisaged two applications for his piston engine. One was a toothed rack attached to the piston whose movement turned a gear wheel to produce rotary motion. The other was to use the reciprocating movements of the piston to move oars or paddles in a steam powered boat. Unfortunately he was unable to attract sponsors to enable him to develop these ideas. Papin was not the first to use a piston, von Guericke came before him, but he was the first to use it to capture the power of steam to do work.


In 1707, with the collaboration of Gottfried Leibniz (still smarting over his dispute with Isaac Newton), Papin published " The New Art of Pumping Water by Using Steam". The Papin / Leibniz pump had many similarities to Savery's 1698 water pump and their claims resulted in a protracted dispute involving the British Royal Society as to the true inventor of the steam driven water pump. Savery's pump did not use a piston but used a vacuum to draw water from below the pump and steam pressure to discharge it at a higher level. Papin's pump on the other hand used only steam pressure and could not draw water from a lower level. (See diagram of Papin's Steam Engine)

Unlike Savery's pump, Papin's pump used a closed cylinder, adjacent to (or even partially immersed in) the lower pool, fed with water from the pool through a non-return valve at the bottom of the cylinder. In the cylinder a free piston rested on the surface of the water which, at its highest point, was level with the water in the pool. Steam from a separate boiler introduced above the piston forced it downwards, displacing the water in the cylinder through another non-return valve at the bottom of the cylinder and upwards to the discharge level. Simply by exhausting the steam from the cylinder through a tap, the external water pressure would cause the cylinder to refill with water through the non-return valve at the base of the cylinder, elevating the piston once more to the level of the surrounding water pool. Cooling was unnecessary since the design did not depend on creating a vacuum in the cylinder.

Papin also suggested a way of using his pump to create rotary motion. He proposed to feed the water raised by the pump over a waterwheel returning it to a lower reservoir in a closed loop system.


Like many gifted inventors Papin died destitute.


See more about Steam Engines


1687 "Philosophiae Naturalis Principia Mathematica" - Mathematical Principles of Natural Philosophy published by English physicist and mathematician Isaac Newton. One of the most important and influential books ever published, it was written in Latin and not translated into English until 1729.


By coincidence Newton was born in 1642, the year that Galileo died.

He made significant advances in the study of Optics demonstrating in 1672 that white light is made up from the spectrum of colours observed in the rainbow. He used a prism to separate white light into its constituent colour spectrum and by means of a second prism he showed that the colours could be recombined into white light.

In 1668 he designed and made the first known reflecting telescope, based on a concave primary mirror and a flat secondary mirror.


He is perhaps best remembered however for his Mechanics, the Laws of Motion and Gravitation which his "Principia" contains.

Newton's Laws of Motion can be summarised as follows:

  • First Law: - Any object will remain at rest or in uniform motion in a straight line unless compelled to change by some external force.
  • Second Law: - The acceleration a of a body is directly proportional to, and in the same direction as, the net force F acting on it, and inversely proportional to its mass m. Thus, F = ma.
  • Third Law: - To every action there is an equal and opposite reaction.

70 years earlier, Galileo had come very close to developing these relationships but he had neither the mathematical tools nor the instruments to make the precise measurements needed to prove his theories. Newton's first law is a restatement of Galileo's concept of inertia, a body's resistance to change of motion, which is measured by its mass. See a Comparison of Galileo's and Newton's "Laws of Motion"


Newton also developed the Law of Universal Gravitation which states that any two bodies in the universe attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. Thus:

F = G m₁m₂ / r²

Where:

F is the force between the bodies

G is the Universal Gravitational Constant

m₁ and m₂ are the masses of the two bodies

r is the distance between the centres of the bodies
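As a worked illustration, using modern approximate textbook values rather than anything from the Principia, the law can be evaluated for the Earth and the Moon; a minimal Python sketch:

```python
# Illustrative calculation with modern approximate values.
G = 6.674e-11        # universal gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r ** 2
print(f"{F:.3e} N")  # ~1.98e20 N

# Newton's second law (F = ma) then gives the Moon's acceleration towards Earth:
a_moon = F / m_moon
print(f"{a_moon:.3e} m/s^2")  # ~2.7e-3 m/s^2
```

The resulting acceleration of about 0.0027 m/s² is just what is needed to hold the Moon in its near-circular monthly orbit.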


Newton was thus able to calculate or predict gravitational forces using the concept of action at a distance. He was also able to explain the motion of the tides as being due to the varying gravitational pull of the Moon on the oceans, whose distance from the Moon changes continuously as the Earth's daily rotation carries them through the otherwise constant gravitational field between the Earth and the Moon.

He did not discover gravity however, nor could he explain it. Galileo was well aware of the effects of gravity, and so was Huygens, a contemporary of Newton, who believed Descartes' earlier theory that gravity could be explained in mechanical terms as a high speed vortex in the aether which caused tiny particles to be thrown outwards by the centrifugal force of the vortex while heavier particles fell inwards due to balancing centripetal forces. Huygens never accepted Newton's inverse square law of gravity.

Newton's concept that planetary motion was due to gravity was completely new. Before that, the motion of heavenly bodies had been explained by Gilbert as well as his contemporary the German astronomer Kepler (1571-1630), and others as being due to magnetic forces.

Even now in the twenty first century, we still do not have a satisfactory explanation of the nature of gravitational forces.


Newton was the giant of the Scientific Revolution. He assimilated the advances made before him in mathematics, astronomy, and physics to derive a comprehensive understanding of the physical world. The impact of the publication of Newton's laws of dynamics on the scientific community was both profound and wide ranging. The laws and Newton's methods provided the basis on which other theories, such as acoustics, fluid dynamics, kinetic energy and work done were built as well as down to earth technical knowledge which enabled the building of the machines to power the Industrial Revolution and, at the other end of the spectrum, they explained the workings of the Universe.


However, of equal or even greater importance was the fact that Newton showed for the first time the general principle that natural phenomena, events and time varying processes, not just mechanical motions, obey laws that can be represented by mathematical equations, enabling analysis and predictions to be made: the laws of nature represented by the laws of mathematics, the foundation of modern science. The three volume publication was thus a major turning point in the development of scientific thought, sweeping away superstition and so called "rational deduction" as ways of explaining the wonders of nature.

Newton's reasoning was supported by his invention of the mathematical techniques of Differential and Integral Calculus and Differential Equations, actually developed in 1665 and 1666, twenty years before he wrote the "Principia" but not used in the proofs it contains. These were major advances in scientific knowledge and capability which extended the range of existing mathematical tools available for characterising nature and for carrying out scientific analysis.

See also Gregory's earlier contribution to calculus theory.


Newton engaged in a prolonged feud with Robert Hooke who claimed priority on some of Newton's ideas. Newton's oft repeated quotation "If I have seen further, it is by standing on the shoulders of giants." was actually written in a sarcastic letter to Hooke, who was almost short enough to be classified as a dwarf, with the implication that Hooke didn't qualify as one of the giants.


Leibniz working contemporaneously with Newton also developed techniques of differential and integral calculus and a dispute developed with Newton as to who was the true originator. Newton's discovery was made first, but Leibniz published his work before Newton. However there is no doubt that both men came to the ideas independently. Newton developed his concept through a study of tangents to a curve and also considered variables changing with time, while Leibniz arrived at his conclusions from calculations of the areas under curves and thought of variables x and y as ranging over sequences of infinitely close values.


Newton is revered as the founder of modern physical science, but despite the great fame he achieved in his lifetime, he remained a modest, diffident, private and religious man of simple tastes. He never married, devoting his life to science.


Newton didn't always have his head in the clouds. In his spare time, when he wasn't dodging apples, he invented the cat-flap.


1698 Searching for a method of replacing the manual or animal labour for pumping out the seeping water which gathered at the bottom of coal mines, English army officer Thomas Savery designed a mechanical, or more correctly, a hydraulic water pump powered by steam. He called the process "Raising Water by Fire". Savery was impressed by the great power of atmospheric pressure working against a vacuum as demonstrated by von Guericke's Magdeburg Hemispheres experiment. He realised that a vacuum could be produced by condensing steam in a sealed chamber and he used this principle as the basis for the first practical steam driven water pump which became known as "The Miner's Friend". Savery's pump did not produce any mechanical motion but used atmospheric pressure to force the water up a vertical pipe from a well or pond below, to fill the vacuum in the steam chamber above, and steam pressure to drive the water in the steam chamber up a vertical discharge pipe to a level above the steam chamber.


(See diagram of Savery's Steam Engine)


The essential components of the pump were a boiler producing steam, a steam chamber at the heart of the system and suction and discharge water pipes each containing a non-return flap valve he called a clack.


Starting with some water in the steam chamber, the steam valve from the boiler is opened introducing steam into the steam chamber where the pressure of the steam forces the water out through a non-return flap valve into the discharge pipe. The head of water in the discharge pipe keeps the flap valve closed so the water can not return into the steam chamber. The steam supply to the chamber is then turned off and the chamber is cooled from the outside with cold water which causes the steam in the chamber to condense creating a vacuum in the chamber. The vacuum in turn causes water to be sucked up from the well or lower pond through another flap valve in the induction pipe into the steam chamber. The head of water in the steam chamber keeps the flap valve closed so that the water can not flow back to the well. Once the chamber is full, steam is fed once more into the chamber and the cycle starts again.


Efficiency was improved by using two parallel steam chambers alternately, such that one of the chambers was charged with steam while the other chamber was cooled. The theoretical maximum depth from which Savery's engine can draw water is limited by the atmospheric pressure, which can support a head of about 32 feet (10 m), but because of leaks the practical limit is about 25 feet. In a mine this would require the engine to be below ground close to the water level, but as we know, fire and coal mines don't mix. On the discharge side the maximum height to which the water can be raised is limited by the available steam pressure and also by the safety of the pressure vessels, whose solder joints were particularly vulnerable, a serious drawback with the available 17th century technology.
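The 32 foot figure can be checked from first principles: the head h of water that atmospheric pressure P can support is h = P/(ρg). A quick calculation, in Python for convenience:

```python
# Maximum suction head supported by atmospheric pressure: h = P / (rho * g)
P = 101325    # standard atmospheric pressure, Pa
rho = 1000    # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2

h = P / (rho * g)
print(f"{h:.2f} m  (~{h * 3.281:.1f} ft)")  # ~10.33 m, a little under 34 ft
```

This gives a theoretical ceiling of a little under 34 feet; allowing for leaks and an imperfect vacuum brings the practical figure down towards the 25 feet quoted above.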


See more about Steam Engines.


1700 At the instigation of Leibniz, King Frederick I of Prussia founded the German Academy of Sciences in Berlin to rival Britain's Royal Society and the French Académie des Sciences. Leibniz was appointed as its first president.


1701 English gentleman farmer Jethro Tull, developed the seed drill, a horse-drawn sowing device which mechanised the planting of seeds, precisely positioning them in the soil and then covering them over. It thus enabled better control of the distribution and positioning of the seeds leading to improvements of up to nine times in crop yields per acre (or hectare). For the farm hand, the seed drill cut out some of the back-breaking work previously employed in the task but the downside was that it also reduced the number of farm workers needed to plant the crop. The seed drill was a relatively simple device which could be made by local carpenters and blacksmiths. Its combined benefits of higher crop yields and productivity improvements were the first steps in mechanised farming which revolutionised British agriculture.

The design concept was not new since similar devices had been used in Europe in the middle ages. Single tube seed drills were also known to have been used in Sumer in Mesopotamia (modern day Iraq) during the Late Bronze Age (1500 B.C.) and multi-tube drills were used in China during the Qin Dynasty.


The introduction of Tull's improved seed drill was an early example of the mechanisation of manual labour tasks which ushered in the Industrial Revolution in Britain.


1705 Head of demonstrations at the Royal Society in London, English physicist and instrument maker appointed by Isaac Newton, Francis Hauksbee the Elder demonstrated an electroluminescent glow discharge lamp which gave off enough light to read by. It was based on von Guericke's electric generator with an evacuated glass globe, containing mercury, replacing the sulphur ball. It produced a glow when he rubbed the spinning globe with his bare hands. The blue light it produced seemed to be alive and was considered at the time to be the work of God. Like von Guericke, Hauksbee never realised the potential of electricity. Instead, electric phenomena were for many years the tool of conjurors and magicians who entertained people at parties with mild electric shocks, producing sparks or miraculously picking up feathers.


1709 Abraham Darby, from a Quaker family in Bristol, established an iron making business at Coalbrookdale in Shropshire, introducing new production methods which revolutionised iron making. He already had a successful brass ware business in Bristol employing casting and metal forming technologies he had learned in the Netherlands and in 1708 he had patented the use of sand casting which he realised was suitable for the mass production of cheaper iron pots for which there was a ready market. The purpose of his move to Coalbrookdale, which already had a long established iron making industry, was to apply these technologies and his metallurgical knowledge to the iron making business to produce cast iron kettles, cooking pots, cauldrons, fire grates and other domestic ironware with intricate shapes and designs.

Early blast furnaces used charcoal as the source of the carbon reducing agent in the Iron smelting process, but Darby investigated the use of different fuels to reduce costs. This was partially out of necessity since the surrounding countryside had been denuded of trees to produce charcoal to fuel the local iron making blast furnaces, but there was still a plentiful local supply of coal as well as Iron ore and limestone. He experimented with using coal instead of charcoal but the high sulphur content of coal made the iron too brittle. His greatest breakthrough was the use of coke, instead of charcoal, which produced higher quality iron at lower cost. It could also be made in bigger blast furnaces, permitting economies of scale.

See the following Footnote about Iron and Steel Making.


Abraham Darby founded a dynasty of iron makers. His son, Abraham Darby II, expanded the output of the Coalbrookdale ironworks to include iron wheels and rails for horse drawn wagon ways and cylinders for the steam engines recently invented by Newcomen some of which he used himself to pump water supplying his water wheels. His grandson, Abraham Darby III, continued in the business and was the promoter responsible for building the world's first iron bridge at Coalbrookdale.


The mass production of low cost ironware made possible by Abraham Darby's iron making process was a major foundation stone on which the subsequent industrialisation of Britain and the Industrial Revolution were based.


  • Footnote
  • Some Key Iron and Steel Making Processes

    • Smelting is the high temperature process of extracting Iron or other metals such as Gold, Silver and Copper from their ores. The principle behind the Iron making or smelting process is the chemical reduction of the iron ores, which are composed of iron oxides, mainly FeO, Fe2O3, and Fe3O4, by heating them in a furnace together with Carbon. The Carbon burns to form Carbon monoxide (CO), which then acts as the reducing agent in the following typical reactions. The process itself is exothermic, which helps to maintain the reaction once it is started.
    • 2C + O2 →   2CO

        Fe2O3 + 3CO →   2Fe + 3CO2

      In early times the carbon was supplied in the form of charcoal. Nowadays coke is used instead. Iron ore however contains a variety of unwanted impurities which affect the properties of the finished iron in different ways and so must be removed from the ore or at least controlled to an acceptable level. A flux such as limestone is often used for this "cleaning" purpose. By combining with the impurities it forms a slag which floats to the top and can be removed from the melt.
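As a side calculation of our own (not from the original text), the reduction equation above fixes how much iron pure haematite (Fe2O3) can yield; a minimal Python sketch using standard atomic weights:

```python
# Iron content of pure haematite (Fe2O3) from standard atomic weights.
M_Fe, M_O, M_C = 55.85, 16.00, 12.01    # g/mol

M_Fe2O3 = 2 * M_Fe + 3 * M_O            # 159.7 g/mol
iron_fraction = 2 * M_Fe / M_Fe2O3
print(f"{iron_fraction:.1%} iron by mass")   # ~69.9%

# CO needed per unit of iron: Fe2O3 + 3CO -> 2Fe + 3CO2
M_CO = M_C + M_O
co_per_fe = 3 * M_CO / (2 * M_Fe)       # ~0.75 kg of CO per kg of Fe
print(f"{co_per_fe:.2f} kg CO per kg Fe")
```

Real ores of course contain the impurities described above, so practical yields are lower.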

    • Casting is the process of pouring molten Iron or steel into a mould and allowing it to solidify. It is an inexpensive method of producing metal components in intricate shapes or simple ingots. Moulds must be able to withstand high temperatures and are usually made from sand with a clay bonding agent to hold it together. The cavity in the mould is formed around a wooden pattern which is removed before pouring in the hot metal.
    • Forging is the process of shaping malleable metals into a desired form by means of compressive forces. It was a skill used for many centuries by blacksmiths who heated the metal in a forge to soften it, then beat it into shape using a hammer. Modern day forging uses machines such as large drop-forging hammers, rolling mills, presses and dies to provide the necessary compression of the work piece. Because these machines can exert very high forces on the work piece, it is also possible to work with cold, unheated metals in some applications. The forging process is not suitable for shaping cast iron because it is brittle and likely to shatter.
    • Swaging is a special case of forging, often cold forging, to form metal, usually into long shapes such as tubes, channels or wires by forcing or pulling the workpiece through a die or between rolls. It is also the method used to form a lip on the edge of sheet steel to provide stability or safety from injury from sharp metal edges.
    • See how gun barrels were manufactured by swaging.

    • Heat Treatment
    • Heat treatment is the black art, practiced by blacksmiths for hundreds of years, of manipulating the properties of steel to suit different applications. These are the tools they have used.

      In its simplest form, steel is an alloy of iron and Carbon and these two elements can exist in several phases which can change with temperature. The mechanical properties of the steel depend on the carbon content and on the structure of the alloy phases present. Heat treatment is concerned with controlling the phases of the alloy to achieve the desired mechanical properties. There are two critical temperatures between which phase changes occur, namely 700°C and 900°C.

      The basic phases and phase changes in normal cast steel are as follows:

      • Steel at normal working temperature (below 700°C) is made up from pearlite which is a mixture of cementite and ferrite (iron). Iron on its own is very soft.
      • Cementite is a name given to the very hard and brittle iron carbide Fe3C which is iron chemically combined with carbon.
      • Above the critical temperature of 700°C a structural change takes place in the alloy and the Carbon in the pearlite dissolves into the iron to form austenite, which is a hard, non-magnetic solid solution of Carbon in iron.
      • If the temperature of the steel cools normally below the 700°C critical temperature, the transformation is reversed and the slow cooling austenite is transformed back into pearlite.
      • If however the austenite is cooled very quickly, by suddenly quenching it in cold water or other cold fluid, the transformation does not have time to take place before the temperature of the alloy falls below the critical temperature. The rapid cooling thus prevents the transformation back to pearlite and instead tends to freeze the composition of the austenite below the critical temperature, transforming the ferrite solution into very hard martensite, in which the ferrite is supersaturated with carbon. Martensite is too hard and brittle for most applications.
      • Quenching at intermediate temperatures results in a mix of martensite and pearlite leaving the steel with an intermediate hardness level.

      These transformations are exploited in the following processes:

    • Hardening - Steel can be hardened by heating it to above the critical temperature and suddenly quenching it in a cold liquid to produce martensite.
    • Annealing - Steel can be softened to make it more workable by heating it to above the critical temperature to form austenite, then letting it cool down slowly to form pearlite. This process is also used to relieve work hardening stresses and crystal dislocations caused during machining or forming processes on the steel.
    • Tempering - The level of hardness or malleability of the steel can be set at any intermediate level between the extremes of the hard martensite and the soft pearlite to produce steel with properties tailored for different applications, from cutting tools to springs, by quenching the steel at the appropriate temperature. Starting with hard martensite, the temperature is gradually increased so that it is partially changed back to pearlite, reducing its hardness and increasing its toughness. The workpiece is quenched or allowed to cool naturally when the desired temperature has been reached.

    The traditional method used for centuries for judging the temperature at which quenching should occur was by means of colour changes on the polished surface of the steel as it is heated. As the steel is heated an oxide layer forms on its surface causing thin-film interference which shows up as a specific colour depending on the thickness of the layer. As the temperature increases the thickness of the oxide layer increases and the colour changes correspondingly, so that for very hard tool steel the workpiece is quenched when the colour is in the light to dark straw range (corresponding to 230°C to 240°C), whereas for spring steel the steel may be quenched when the colour is blue (300°C). Nowadays, for major tempering processes the temperature is measured by infrared thermometers or other instruments; however, the traditional method is still widely used for small jobs.
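The colour-to-temperature correspondence amounts to a simple lookup table. A minimal Python sketch using only the figures quoted above (a real tempering chart lists many more colours, which would need checking before use):

```python
# Tempering colours and approximate temperatures, from the figures above only.
TEMPER_COLOURS_C = {
    "light straw": 230,  # very hard tool steel
    "dark straw": 240,
    "blue": 300,         # spring steel
}

def quench_temperature(colour):
    """Return the approximate quenching temperature (deg C) for a surface colour."""
    return TEMPER_COLOURS_C[colour]

print(quench_temperature("blue"))  # 300
```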

    • Case Hardening
    • It is difficult to achieve both extreme hardness and extreme toughness in homogeneous alloys. Case hardening is a method of obtaining a thin layer of hard (high carbon) steel on the surface of a tough (low carbon) steel object while retaining the toughness of its body. Essentially a development of the ancient cementation process for carbonising iron, it involves the diffusing of Carbon into the outer layer of the steel at high temperature in a carbon rich environment for a pre-determined period and then quenching it so that the Carbon structure is locked in.


    Summary of Iron and Steel Making Processes and What They Do

      • Bloomery - Low temperature furnace. Converts iron ore into wrought iron.
      • Cementation Process - Low temperature furnace. Converts wrought iron into steel by diffusion of carbon.
      • Blast Furnace - High temperature furnace. Converts Iron ore into pig Iron.
      • Puddling - High temperature furnace. Converts pig Iron into wrought Iron.
      • Casting - High temperature furnace. Moulds molten Iron and steel output into useful shapes.
      • Forging - Mechanical process. Forms steel ingots into useful shapes.
      • Heat Treatment - Low temperature process. Changes the mechanical properties of the steel.
      • Crucible Process - High temperature, low volume process. Purifies and strengthens low quality steel. Also used to create special steels and alloys.
      • Bessemer Converter - High temperature furnace. Converts pig Iron into steel.
      • Open Hearth (Siemens) Furnace - High temperature furnace. Converts pig Iron and scrap Iron into steel.
      • Electric Arc Furnace - Converts scrap Iron and steel into steel.

    Iron and Steel Properties

    • Wrought Iron
    • Wrought iron was initially developed by the Hittites around 2000 B.C.. In early times in Europe the smelting process was carried out by the village blacksmith in a simple chimney shaped furnace, constructed from clay or stone with a clay lining, called a bloomery. Gaps around the base allowed air to be supplied by means of a bellows blowing the air through a tuyère into the furnace. Charcoal was both the initial heat source and the Carbon reducing agent for extracting the Iron from the ore. Once the furnace was started the Iron ore and more charcoal were loaded from the top to start and maintain the chemical reaction. It was not usually possible with this method to achieve temperatures as high as the 1538°C melting point of iron, but it was sufficient to heat up the iron ore into a spongy mass called a bloom, separating the Iron from the majority of impurities in the Iron ore but leaving some glassy silicates included in the Iron. If the furnace temperature was allowed to get too high the bloom could melt and carbon could dissolve into the Iron, giving it the unwanted properties of cast iron.

      Once the reduction process was complete the bloom was removed from the furnace and by heating and hammering it, the impurities were forced out but some of the silicates remained as slag, which was mainly calcium silicate, CaSiO3, in fibrous inclusions in the Iron creating wrought iron (from "wrought" meaning "worked"). Wrought iron has a very low carbon content of around 0.05% by weight with good tensile strength and shock resistance but is poor in compression and the slag inclusions give the Iron a typical grained appearance. Being relatively soft, it is ductile, malleable and easy to work and can be heated and forged into shape by hammering and rolling. It is also easy to weld.

      Because of the manual processes involved, wrought Iron could only be made in batches and manufacturing was very costly and difficult to mechanise.


    • Cast Iron
    • Cast Iron was first produced by the Chinese in the fifth century B.C.. The process of smelting iron ore to produce cast iron needs to operate at temperatures of 1600°C or more, sufficient to melt the iron. To produce the higher temperatures the bloomery furnace technique was upgraded to a blast furnace by increasing the rate of oxygen supply to the melt by means of a blowing engine or air pump which blasted the air into the bottom of a cone shaped furnace. Early blowing engines were powered by waterwheels but these were superseded by steam engines once they became available. To remove or reduce the impurities present in the ore, limestone (CaCO3), known as the flux, was added to the charge which was continuously fed into the furnace from above. At the high temperatures in the furnace the limestone reacts with silicate impurities to form a molten slag which floats on top of the denser Iron, which sinks to the narrow bottom part of the cone where it can be run off through a channel into moulded depressions in a bed of sand. The slag is similarly run off separately from the top of the melt. Because the metal ingots created in the moulds which receive molten Iron from the runner resembled the shape of suckling pigs, the Iron produced this way is known as pig Iron. An important feature of the blast furnace is that it enables cast Iron to be made in a continuous process, greatly reducing the labour costs. Stopping, cooling and restarting a blast furnace however involves a major refurbishment of the furnace to get it back into operation again, and great efforts are usually made to avoid such a disruption.


      Iron produced in this way has a crystalline structure and contains 4% to 5% Carbon. The presence of the Carbon atoms impedes the ability of the dislocations in the crystal lattice of the Iron atoms to slide past one another, thus increasing its hardness. Pig Iron is so hard, so brittle and so difficult to work that it is almost useless. It is however reprocessed and used as an intermediate material in the production of commercial iron and steel, by reheating to reduce the Carbon content further or by combining the ingots with other materials or even scrap iron to change its properties. Iron with Carbon content reduced to 2% to 4% is called cast Iron. It can be used to create intricate shapes by pouring the molten metal into moulds and it is easier to work than pig Iron but still relatively hard and brittle. While strong in compression, cast Iron has poor tensile strength and is prone to cracking, which makes it unable to tolerate bending loads.


    • Steel
    • Steel is Iron after the removal of most of the impurities such as silica, Phosphorus, Sulphur and excess Carbon which severely weaken its strength. It may however have other elements, which were not present in the original ore, added to form alloys which enhance specific properties of the steel. Steel normally has a Carbon content of 0.25% to 1.5%, slightly higher than wrought Iron, but it does not have the silicate inclusions which are characteristic of wrought Iron. Removing the impurities retains the malleability of wrought Iron while giving the steel much greater load-bearing strength, but it is an expensive and difficult task.

      Cast steel can be made by a variety of processes including crucible steel, the Bessemer converter and the open hearth method and thus may have a range of properties. See steelmaking summary above.

      Other alloying elements such as Manganese, Chromium, Vanadium and Tungsten may be added to the mix to create steels with particular properties for different applications. By controlling the Carbon content of the steel as well as the percentage of different alloying materials, steel can be made with a range of properties. Examples are:

      • Blister Steel was a crude form of steel made by the cementation process, an early method of hardening wrought Iron. It is now obsolete.
      • Mild steel, the most common form of steel, which contains about 0.25% Carbon making it ductile and malleable so that it can be rolled or pressed into complex forms suitable for automotive panels, containers and metalwork used in a wide variety of consumer products
      • High carbon steel or tool steel with about 1.5% Carbon which makes it relatively hard with the ability to hold an edge. The higher the Carbon content, the greater the hardness
      • Stainless steel which contains Chromium and Nickel which make it resistant to corrosion
      • Titanium steel which keeps its strength at high temperatures
      • Manganese steel which is very hard and used for rock breaking and military armour
      • Spring steel with various amounts of Nickel and other elements to give it very high yield strength
      • As well as other specialist steels such as steels optimised for weldability

      Mild steel has largely replaced wrought Iron which is no longer made in commercial quantities, though the term is often applied incorrectly to craft made products such as railings and garden furniture which are actually made from mild steel.


    Iron and Steelmaking Development Timeline

    Steel making has gone through a series of developments to achieve ever more precise control of the process as well as better efficiency.


1350 Around this time the first blast furnaces for smelting Iron from its ore began to appear in Europe, 1800 years after the Chinese had first used the technique.


See more about Cast Iron and Steel.


1368-1644 China's Ming dynasty. When the Ming dynasty came into power, China was the most advanced nation on Earth. During the Dark Ages in Europe, China had already developed cast Iron, the compass, gunpowder, rockets, paper, paper money, canals and locks, block printing and moveable type, porcelain, pasta and many other inventions centuries before they were "invented" by the Europeans. From the first century B.C. they had also been using deep drilling to extract petroleum from the underlying rocks. They were so far ahead of Europe that when Marco Polo described these wondrous inventions in 1295 on his return to Venice from China he was branded a liar. China's innovation was based on practical inventions founded on empirical studies, but their inventiveness seems to have deserted them during the Ming dynasty and subsequently during the Qing (Ching) dynasty (1644 - 1911). China never developed a theoretical science base and both the Western scientific and industrial revolutions passed China by. Why should this be?


It is said that the answer lies in Chinese culture, to some extent Confucianism but particularly Daoism (Taoism) whose teachings promoted harmony with nature whereas Western aspirations were the control of nature. However these conditions existed before the Ming when China's innovation led the world. A more likely explanation can be found in China's imperial political system in which a massive society was rigidly controlled by all-powerful emperors through a relatively small cadre of professional administrators (Mandarins) whose qualifications were narrowly based on their knowledge of Confucian ideals. If the emperor was interested in something, it happened, if he wasn't, it didn't happen.

The turning point in China's technological dominance came when the Ming emperor Xuande came to power in 1426. Admiral Zheng He, a Muslim eunuch, castrated as a boy when the Chinese conquered his tribe, had recently completed an audacious voyage of exploration on behalf of a previous Ming emperor, Yongle, to assert China's control of all of the known world and to extract tribute from its intended subjects. But his new master considered the benefits did not justify the huge expense of Zheng's fleet of 62 enormous nine masted junks and 225 smaller supply ships with their 27,000 crew. The emperor mothballed the fleet and henceforth forbade the construction of any ships with more than two masts, curbing China's aspirations as a maritime power and putting an end to its expansionist goals, a xenophobic policy which has lasted until modern times.

The result was that during both the Ming and the Qing dynasties a succession of complacent, conservative emperors cocooned in prodigious, obscene wealth, remote even from their own subjects, lived in complete isolation and ignorance of the rest of the world. Foreign influences, new ideas, and an independent merchant class who sponsored them, threatened their power and were consequently suppressed. By contrast the West was populated by smaller, diverse and independent nations competing with each other. Merchant classes were encouraged and innovation flourished as each struggled to gain competitive or military advantage.


Times have changed. Currently China is producing two million graduates per year, sixty percent of whom are in science and technology subjects, three times as many as in the USA.

After Japan, China is the second largest battery producer in the world and growing fast.


1450 German goldsmith and calligrapher Johannes Gensfleisch zum Gutenberg from Mainz invented the printing press, considered to be one of the most important inventions in human history. For the first time knowledge and ideas could be recorded and disseminated to a much wider public than had previously been possible using hand written texts, and its use spread rapidly throughout Europe. Intellectual life was no longer the exclusive domain of the church and the court and an era of enlightenment was ushered in, with science, literature, religious and political texts becoming available to the masses who in turn had the facility to publish their own views challenging the status quo. It was the ability to publish and spread one's ideas that enabled the Scientific Revolution to happen. Nowadays the Internet is bringing about a similar revolution.


Although it was new to Europe, the Chinese had already invented printing with moveable type four hundred years earlier but, because of China's isolation, these developments never reached Europe.


Gutenberg printed Bibles and supported himself by printing indulgences, slips of paper sold by the Catholic Church to secure remission of the temporal punishments in Purgatory for sins committed in this life. He was a poor businessman and made little money from his printing system and depended on subsidies from the Archbishop of Mainz. Because he spent what little money he had on alcohol, the Archbishop arranged for him to be paid in food and lodging, instead of cash. Gutenberg died penniless in 1468.


1474 The first patent law, a statute issued by the Republic of Venice, provided for the grant of exclusive rights for limited periods to the makers of inventions. It was a law designed more to protect the economy of the state than the rights of the inventor since, as the result of its declining naval power, Venice was changing its focus from trading to manufacturing. The Republic required to be informed of all new and inventive devices, once they had been put into practice, so that they could take action against potential infringers.


1478 After 10 years working as an apprentice and assistant to successful Florentine artist Andrea del Verrocchio at the court of Lorenzo de Medici in Florence, at the age of 26 Leonardo da Vinci left the studio and began to accept commissions on his own.

One of the most brilliant minds of the Italian Renaissance, Leonardo was hugely talented as an artist and sculptor but also immensely creative as an engineer, scientist and inventor. The fame of his surviving paintings has meant that he has been regarded primarily as an artist, but his scientific insights were far ahead of their time. He investigated anatomy, geology, botany, hydraulics, acoustics, optics, mathematics, meteorology, and mechanics and his inventions included military machines, flying machines, and numerous hydraulic and mechanical devices.


He lived in an age of political in-fighting and intrigue between the independent Italian states of Rome, Milan, Florence, Venice and Naples as well as lesser players Genoa, Siena, and Mantua ever threatening to degenerate into all out war, in addition to threats of invasion from France. In those turbulent times da Vinci produced a series of drawings depicting possible weapons of war during his first two years as an independent. Thus began a lifelong fascination with military machines and mechanical devices which became an important part of his expanding portfolio and the basis for many of his offers to potential patrons, the heads of these belligerent, or fearful, independent states.

Despite his continuing interest in war machines, he claimed he was not a war monger and he recorded several times in his notebooks his discomfort with designing killing machines. Nevertheless, he actively solicited such commissions because by then he had his own pupils and needed the money to pay them.


Most of Leonardo's designs were not constructed in his lifetime and we only know about them through the many models he made but mostly from the 13,000 pages of notes and diagrams he made in which he recorded his scientific observations and sketched ideas for future paintings, architecture, and inventions. Unlike academics today who rush into publication, he never published any of his scientific works, fearing that others would steal his ideas. Patent law was still in its infancy and difficult, if not impossible, to enforce. Such was his paranoia about plagiarism that he even wrote all of his notes back to front, in mirror writing, sometimes also in code, so he could keep his ideas private. He was not however concerned about keeping the notes secret after his death and in his will he left all his manuscripts, drawings, instruments and tools to his loyal pupil, Francesco Melzi with no objection to their publication. Melzi expected to catalogue and publish all of Leonardo's works but he was overwhelmed by the task, even with the help of two full-time scribes, and left only one incomplete volume, "Trattato della Pittura" or "Treatise on Painting", about Leonardo's paintings before he himself died in 1570. On his death the notes were inherited by his son Orazio who had no particular interest in the works and eventually sections of the notes were sold off piecemeal to treasure seekers and private collectors who were interested more in Leonardo's art than his science.


Because of his secrecy, his contemporaries knew nothing of his scientific works which consequently had no influence on the scientific revolution which was just beginning to stir. It was about two centuries before the public and the scientific community began gradually to get access to Leonardo's scientific notes when some collectors belatedly allowed them to be published or when they ended up on public display in museums where they became the inspiration for generations of inventors. Unfortunately, only 7000 pages are known to survive and over 6000 pages of these priceless notebooks have been lost forever. Who knows what wisdom they may have contained?


Leonardo da Vinci is now remembered as both "Leonardo the Artist" and "Leonardo the Scientist" but perhaps "Leonardo the Inventor" would be more apt as we shall see below.


Leonardo the Artist

It would not do justice to Leonardo to mention only his scientific achievements without mentioning his talent as a painter. His true genius was not as a scientist or an artist, but as a combination of the two: an "artist-engineer".

He did not sign his paintings and only 24 of his paintings are known to exist plus a further 6 paintings whose authentication is disputed. He did however make hundreds of drawings most of which were contained in his copious notes.

  • The "Treatise on Painting"
  • This was the volume of Leonardo's manuscripts transcribed and compiled by Melzi. The engravings needed for reproducing Leonardo's original drawings were made by another famous painter, Nicolas Poussin. As the title suggests it was intended as a technical manual for artists; however it does contain some scientific notes about light, shade and optics in so far as they affect art and painting. For the same reason it also contains a small section of Leonardo's scientific works about anatomy. The publication of this volume in 1651 was the first time examples of the contents of Leonardo's notebooks were revealed to the world, but it was 132 years after his death. The full range of his "known" scientific work was only made public little by little many years later.


Leonardo was one of the world's greatest artists, the few paintings he made were unsurpassed and his draughtsmanship had a photographic quality. Just seven examples of his well known artworks are mentioned here.

  • Paintings
    • The "Adoration of the Magi" painted in 1481.
    • The "Virgin of the Rocks" painted in 1483.
    • "The Last Supper" a large mural 29 feet long by 15 feet high (8.8 m x 4.6 m) started in 1495 which took him three years to complete.
    • The "Mona Lisa" (La Gioconda) painted in 1503.
    • "John the Baptist" painted in 1515.
  • Drawings
    • The "Vitruvian Man" as described by the Roman architect Vitruvius was drawn in 1490, showing the correlation between the proportions of the ideal human body with geometry, linking art and science in a single work.
    • Illustrations for mathematician Fra Luca Pacioli's book "De divina proportione" (The Divine Proportion), drawn in 1496. See more about The Divine Proportion.

Leonardo the Scientist

The following are some examples of the extraordinary breadth of da Vinci's scientific works

  • Military Machines
  • After serving his apprenticeship with Verrocchio, Leonardo had a continuous flow of military commissions throughout his working life.

    In 1481 he wrote to Ludovico Sforza, Duke of Milan, with a detailed C.V. of his military engineering skills, offering his services as military engineer, architect and sculptor, and was appointed by him the following year. In 1502 the ruthless and murderous Cesare Borgia, illegitimate son of Pope Alexander VI and seducer of his own younger sister (Lucrezia Borgia), appointed Leonardo as military engineer to his court, where he became friends with Niccolo Machiavelli, Borgia's influential advisor. In 1507, some time after France had invaded and occupied Milan, he accepted the post of painter and engineer to King Louis XII of France in Milan and finally in 1517 he moved to France at the invitation of King Francis I to take up the post of First Painter, Engineer and Architect of the King. These commissions gave Leonardo ample scope to develop his interest in military machines.


    Leonardo designed war machines for both offensive and defensive use. They were designed to provide mobility and flexibility on the battlefield which he believed was crucial to victory. He also designed machines to use gunpowder which was still in its infancy in the fifteenth century.


    His military inventions included:

    • Mobile bridges including drawbridges and a swing bridge for crossing moats, ditches and rivers. His swing bridge was a cantilever design with a pivot on the river bank and a counterweight to facilitate manoeuvring the span over the river. It also had wheels and a rope-and-pulley system which enabled easy transport and quick deployment.
    • Siege machines for storming walls.
    • Chariots with scythes mounted on the sides to cut down enemy troops.
    • A giant crossbow intended to fire large explosive projectiles several hundred yards.
    • Trebuchets - Very large catapults, based on releasing mechanical counterweights, for flinging heavy projectiles into enemy fortifications.
    • Bombards - Short barrelled, large-calibre, muzzle-loading, heavy siege cannon or mortars, fired by gunpowder and used for throwing heavy stone balls. The modern replacement for the trebuchet. Leonardo's design had adjustable elevation. He also envisaged exploding cannonballs, made up from several smaller stone cannonballs sewn into spherical leather sacks and designed to injure and kill many enemies at one time. We would now call these cluster bombs.
    • Springalds - Smaller, more versatile cannon, for throwing stones or Greek fire, with variable azimuth and elevation adjustment so that they could be aimed more precisely.
    • A series of guns and cannons with multiple barrels. The forerunners of machine guns.
    • They included a triple barrelled cannon and an eight barrelled gun with eight muskets mounted side by side as well as a 33 barrelled version with three banks of eleven muskets designed to enable one set of eleven guns to be fired while a second set cooled off and a third set was being reloaded. The banks were arranged in the form of a triangle with a shaft passing through the middle so that the banks could be rotated to bring the loaded set to the top where it could be fired again.

    • A four wheeled armoured tank with a heavy protective cover reinforced with metal plates similar to a turtle or tortoise shell with 36 large fixed cannons protruding from underneath. Inside a crew of eight men operating cranks geared to the wheels would drive the tank into battle. The drawing in Leonardo's notebook contains a curious flaw since the gearing would cause the front wheels to move in the opposite direction from the rear wheels. If the tank was built as drawn, it would have been unable to move. It is possible that this simple error would have escaped Leonardo's inventive mind but it is also suggested that like his coded notes, it was a deliberate fault introduced to confuse potential plagiarists. The idea that this armoured tank loaded with 36 heavy cannons in such a confined space could be both operated and manoeuvred by eight men is questionable.
    • Automatic igniting device for firearms.
  • Marine Warfare Machines and Devices
  • Leonardo also designed machines for naval warfare including:

    • Designs for a pedal driven paddle boat. The forerunner of the modern pedalo.
    • Hand flippers and floats for walking on water.
    • Diving suit to enable enemy vessels to be attacked from beneath the water's surface by divers cutting holes below the boat's water line. It consisted of a leather diving suit equipped with a bag-like helmet fitting over the diver's head. Air was supplied to the diver by means of two cane tubes attached to the headgear which led up to a cork diving bell floating on the surface.
    • A double hulled ship which could survive the exterior skin being pierced by ramming or underwater attack, a safety feature which was eventually adopted in the nineteenth century.
    • An armoured battleship similar to the armoured tank which could ram and sink enemy ships.
    • Barrage cannon - a large floating circular platform with 16 cannons mounted around its periphery. It was powered and steered by two operators turning drive wheels geared to a large central drive wheel connected to paddles for propelling it through the water. Other operators fired the cannons.
  • Flying Machines
  • Leonardo studied the flight of birds and, after the legendary Icarus, was one of the first to attempt to design human powered flying machines, recording his ideas in numerous drawings. A step up from Chinese kites.

    His drawings included:

    • A design for a parachute. The world's first.
    • Various gliders
    • Designs for wings intended to carry a man aloft, similar to scaled up bat wings.
    • Human powered flying machines known as ornithopters, (from Greek ornithos "bird" and pteron "wing"), based on flapping wings operated by means of levers and cables.
    • A helical air screw with its central shaft powered by a circular human treadmill intended to lift off and fly like a modern helicopter.
  • Civil Works
  • Leonardo designed many civil works for his patrons and also the equipment to carry them out.

    These included:

    • A crane for excavating canals, a dredger and lock gates designed with swinging gates rather than the lifting doors of the "portcullis" or "guillotine" designs which were typically used at the time. Leonardo's gates also contained smaller hatches to control the rate of filling the lock to avoid swamping the boats.
    • Water lifting devices based on the Archimedes screw and on water wheels
    • Water wheels for powering mechanical devices and machines.
    • Architecture: Leonardo made many designs for buildings, particularly cathedrals and military structures, but none of them were ever built.
    • When Milan, with a population of 200,000 living in crowded conditions, was beset by bubonic plague, Leonardo set about designing a more healthy and pleasant ideal city. It was to be built on two levels with the upper level reserved for the householders and with living quarters for servants and facilities for deliveries on the lower level. The lower level would also be served by covered carriageways and canals for drainage and to carry away sewage, while the residents of the upper layer would live in more tranquil, airy conditions above all this, with pedestrian walkways and gardens connecting their buildings.
    • Leonardo produced a precision map of Imola, accurate to a few feet (about 1 m) based on measurements made with two variants of an odometer or what we would call today a surveyor's wheel which he designed and which he called a cyclometer. They were wheelbarrow-like carts with geared mechanisms on the axles to count the revolutions of the wheels from which the distance could be determined. He followed up with physical maps of other regions in Italy.
  • Tools and Instruments
  • The following are examples of some of the tools and scientific instruments designed by da Vinci which were found in his notes.

    • Solar Heating - In 1515 when he worked at the Vatican, Leonardo designed a system of harnessing solar energy using a large concave mirror, constructed from several smaller mirrors soldered together, to focus the Sun's rays to heat water.
    • Improvements to the printing press to simplify its operation so that it could be operated by a single worker.
    • Anemometer - It consisted of a horizontal bar from which was suspended a rectangular piece of wood by means of a hinge. The horizontal bar was mounted on two curved supports on which a scale to measure the rotation of the suspended wood was marked. When the wind blew, the wood swung on its hinge within the frame and the extent of the rotation was noted on the scale which gave an indication of the force of the wind.
    • A 13 digit decimal counting machine - Based on a gear train and often incorrectly identified as a mechanical calculator.
    • Clock - Leonardo was one of the early users of springs rather than weights to drive the clock and to incorporate the fusée mechanism, a cone-shaped pulley with a helical groove around it which compensated for the diminishing force from the spring as it unwound. His design had two separate mechanisms, one for minutes and one for hours as well as an indication of phases of the moon.
    • He also designed numerous machines to facilitate manufacturing including a water powered mechanical saw, horizontal and vertical drilling machines, spring making machines, machines for grinding convex lenses, machines for grinding concave mirrors, file cutting machines, textile finishing machines, a device for making sequins, rope making machines, lifting hoists, gears, cranks and ball bearings.
    • Though drawings and models exist, the claim that Leonardo invented the bicycle is thought by many to be a hoax. The rigid frame had no steering mechanism and the machine would be impossible to ride.
  • Theatrical Designs
    • Leonardo was often in demand for designing theatrical sets and decorations for carnivals and court weddings.
    • He also built automata in the form of robots or animated beasts whose lifelike movements were created by a series of springs, wires, cables and pulleys.
    • His self propelled cart, powered by a spring, was used to amaze theatre audiences.
    • He designed musical instruments including a lyre, a mechanical drum, and a viola organista with a keyboard. This latter instrument consisted of a series of strings each tuned to a different pitch. A bow in the form of a continuously rotating loop perpendicular to the strings was stretched between two pulleys mounted in front of the strings. The keys on the keyboard were each associated with a particular string and when a key was pressed a mechanism pushed the bow against the corresponding string to play the note.
  • Anatomy
  • As part of his training in Verrocchio's studio, like any artist, Leonardo studied anatomy as an aid to figure drawing. However, starting around 1487, and later with the doctor Marcantonio della Torre, he made much more in depth studies of the body, its organs and how they function.

    • During his studies Leonardo had access to 30 corpses which he dissected, removing their skin, unravelling intestines and making over 200 accurate drawings of their organs and body parts.
    • He made similar studies of other animals, dissecting cows, birds, monkeys, bears, and frogs, and comparing their anatomical structure with that of humans.
    • He also observed and tried to comprehend the workings of the cardiovascular, respiratory, digestive, reproductive and nervous systems and the brain, without much success. He did however witness the killing of a pig during a visit to an abattoir. He noticed that when a skewer was thrust into its heart, the beat of the heart coincided with the movement of blood into the main arteries. He understood the mechanism of the heart, if not its function, predating by over 100 years the conclusions of Harvey.

    Because the bulk of his work was not published for over 200 years, his observations could possibly have prompted an earlier advance in medical science had they been made available during his lifetime. At least his drawings provided a useful resource for future students of anatomy.

  • Scientific Writings
  • Leonardo had an insatiable curiosity about both nature and science and made extensive observations which were recorded in his notebooks.

    They included:

    • Anatomy, biology, botany, hydraulics, mechanics, ballistics, optics, acoustics, geology, fossils

    He did not however develop any new scientific theories or laws. Instead he used the knowledge gained from his observations to improve his skills as an artist and to invent a constant stream of useful machines and devices.


"Leonardo the Inventor"

Leonardo unquestionably had one of the greatest inventive minds of all time, but very few of his designs were ever constructed at the time. The reason normally given is that the technology didn't exist during his lifetime. With his skilled draughtsmanship, Leonardo's designs looked great on paper but in reality many of them would not actually work in practice, an essential criterion for any successful invention, and this has since been borne out by subsequent attempts to construct the devices as described in his plans. This should not however detract in any way from Leonardo's reputation as an inventor. His innovations were way ahead of their time, unique, wide ranging and based on sound engineering principles. What was missing was the science.


At least he had the benefits of Archimedes' knowledge of levers, pulleys and gears, all of which he used extensively, but that was the limit of available science.

Newton's Laws of Motion were not published until two centuries after Leonardo was working on his designs. The science of strength of materials was also unheard of until Newton's time, when Hooke made some initial observations about stress and strain, and there was certainly no data available to Leonardo about the engineering properties of materials such as tensile, compressive, bending and impact strength, or about air pressure and the densities of the air and other materials. Torricelli's studies on air pressure came about fifty years before Newton, and Bernoulli's theory of fluid flow, which describes the science behind aerodynamic lift, did not come until about fifty years after Newton. But even if the science had existed, Leonardo lacked the mathematical skills to make the best of it.


So it's not surprising that Leonardo had to make a lot of assumptions. This did not so much affect the function of his mechanisms nor the operating principle on which they were based; rather it affected the scale and proportions of the components and the force or power needed to operate them. His armoured tank would have been immensely heavy and difficult to manoeuvre, and its naval version would have sunk unless its buoyancy was improved. The wooden gears used would probably have been unable to transmit the enormous forces required to move these heavy vehicles. The repeated recoil forces on his multiple-barrelled guns may have shattered their mounts, and his flying machines were very flimsy, with inadequate wing area and a requirement for far more human power to keep them aloft than a man could provide. So there was nothing fundamentally wrong with most of his designs and most of the shortcomings could have been overcome with iterative development and testing programmes to refine the designs. Unfortunately Leonardo never had that opportunity.


"Leonardo the Myths"

Leonardo was indeed a genius but his reputation has also been enhanced or distorted by uncritical praise. Speculation, rather than firm evidence, about the performance of some of the mechanisms mentioned in his notebooks and what may have been in the notebooks which have been lost, has incorrectly credited him with the invention of the telescope, mathematical calculating machines and the odometer to name just three examples.

Though he did experiment with optics and made drawings of lenses, he never mentioned a telescope in his notes, nor what he may have seen with one, so it is highly unlikely that he invented the telescope.

As for his so called calculating machine: It looked very similar to the calculator made by Pascal 150 years later but it was in fact just a counting machine since it did not have an accumulator to facilitate calculations by holding two numbers at a time in the machine as in Pascal's calculator.

Leonardo's "telescope" and "calculating machine" are examples of uninformed speculation from tantalising sketches made, without corresponding explanations, in his notes. Such speculation is based on the reasoning that, if one of his sketches or drawings "looks like" some more recent device or mechanism, then it "must be" or actually "is" an early example of such a device. Leonardo already had a well deserved reputation as a genius without this unnecessary gold plating.

Similarly regarding the odometer: The claim by some, though not by Leonardo himself, that he invented the odometer implies that he was the first to envisage the concept of an odometer. The odometer was in fact invented by Vitruvius 15 centuries earlier. Leonardo invented "an" odometer, not "the" odometer. Many inventions are simply improvements, alternatives or variations, of what went before. Without a knowledge of precedents, it is a mistake to extrapolate a specific case to a general conclusion. Leonardo's design was based on measuring the rotation of gear wheels, whereas Vitruvius' design was based on counting tokens. (Note that Vitruvius also mentions in his "Ten Books on Architecture", designs for trebuchets, water wheels and battering rams protected by mobile siege sheds or armoured vehicles which were called "tortoises".)

It is rare to find an invention which depends completely on a unique new concept and many perfectly good inventions are improvements or alternatives to prior art. This applies to some of Leonardo's inventions just as it does to the majority of inventions today. Nobody would (or should) claim that Leonardo invented the clock when his innovation was to incorporate a new mechanical movement into his own version of a clock, nor should they denigrate his actual invention.


It's a great pity that Leonardo kept his works secret and that they remained unseen for so many years after his death. How might technology have advanced if he had been willing to share his ideas, to explain them to his contemporaries and to benefit from their comments?


1492 Discovery of the New World by Christopher Columbus showed that the Earth still held vast unknowns, indirectly giving impetus to the scientific revolution.


1449 The first patent for an invention was granted by King Henry VI to Flemish-born John of Utynam for a method of making stained glass, required for the windows of Eton College, giving John a 20-year monopoly. The Crown thus started making specific grants of privilege to favoured manufacturers and traders, signified by Letters Patent, open letters marked with the King's Great Seal.

The system was open to corruption and in 1623 the Statute of Monopolies was enacted to curb these abuses. It was a fundamental change to patent law which took away the rights of the Crown to create trading monopolies and guaranteed the inventor the legal right of patents instead of depending on the royal prerogative. So called patent law, or more generally intellectual property law, has undergone many changes since then to encompass new concepts such as copyrights and trademarks and is still evolving as new technologies such as software and genetics demand new rules.


1500 to 1700 The Scientific Revolution and The Age of Reason

Up to the end of the sixteenth century there had been little change in the accepted scientific wisdom inherited from the Greeks and Romans. Indeed it had even been reinforced in the thirteenth century by St. Thomas Aquinas who proclaimed the unity of Aristotelian philosophy with the teachings of the church. The credibility of new scientific ideas was judged against the ancient authority of Aristotle, Galen, Ptolemy and others whose science was based on rational thought which was considered to be superior to experimentation and empirical methods. Challenging these conventional ideas was considered to be a challenge to the church and scientific progress was hampered accordingly.

In medieval times, the great mass of the population had no access to formal education let alone scientific knowledge. Their view of science could be summed up in the words of Arthur C. Clarke, "Any sufficiently advanced technology is indistinguishable from magic".


Things began to change after 1500 when a few pioneering scientists discovered, and were able to prove, flaws in this ancient wisdom. Once this happened others began to question accepted scientific theories and devised experiments to validate their ideas. In the past, such challenges had been hampered by the lack of accurate measuring instruments which had limited the range of experiments that could be undertaken and it was only in the seventeenth century that instruments such as microscopes, telescopes, clocks with minute hands, accurate weighing equipment, thermometers and manometers started to become available. Experimenters were then able to develop new and more accurate measurement tools to run their experiments and to explore new scientific territories thus accelerating the growth of new scientific knowledge.

The printing press was the great catalyst in this process. Scientists could publish their work, thus reaching a much greater audience, but just as important, it gave others working in the field, access to the latest developments. It gave them the inspiration to explore these new scientific domains from a new perspective without having to go over ground already covered by others.

The increasing use of gunpowder also had its effect. Cannons and hand held weapons swept the aristocratic knight from the field of battle. Military advantage and power went to those with the most effective weapons and heads of state began to sponsor experimentation in order to gain that advantage.

Scientific method thus replaced rational thought as a basis for developing new scientific theories and over the next 200 years scientific theories and scientific institutions were transformed, laying the foundations on which the later Industrial Revolution depended.


Some pioneers are shown below.


  • (600 B.C.) Thales, the original thinker, deprecated by Aristotle.
  • (300 B.C.) Euclid promoted the disciplines of proof, logic and deductive reasoning in mathematics.
  • (269 B.C.) Archimedes followed Euclid's disciplines and was the first to base engineering inventions on mathematical principles.
  • (1450) Johannes Gutenberg did not make any scientific breakthroughs but his printing press was one of the most important developments and essential prerequisites which made the scientific revolution possible. For the first time it became easy to record information and to disseminate knowledge making learning and scholarship available to the masses.
  • (1492) Christopher Columbus' discovery of the New World showed that the World still held vast unknowns, sparking curiosity.
  • (1514) Nicolaus Copernicus challenged the accepted wisdom of Ptolemy which had reigned supreme for 1400 years, that the Earth was the centre of the Universe, and proposed instead that the Universe was centred on the Sun.
  • (1543) Andreas Vesalius showed that conventional theories about human anatomy, unquestioned since they were developed over 1300 years earlier by Galen, were incorrect.
  • (1576) Tycho Brahe made detailed astronomical measurements to enable predictions of planetary motion to be based on observations rather than logical deduction.
  • (1600) William Gilbert, an early advocate of scientific method rather than rational thought.
  • (1605) Francis Bacon, like Gilbert, a proponent of scientific method.
  • (1608) Hans Lippershey invented the telescope, thus providing the tools for much more accurate observations, and deeper understanding of the cosmos.
  • (1609) Johannes Kepler developed mathematical relationships, based on Brahe's measurements which enabled planetary movements to be predicted.
  • (1610) Galileo Galilei demonstrated that the Earth was not the centre of the Universe and in so doing, brought himself into serious conflict with the church.
  • (1628) William Harvey outlined the true function of the heart correcting misconceptions about the functions and flow of blood as well as classical myths about its purpose.
  • (1642) Pascal, together with Fermat (1653), described chance and probability in mathematical terms, rather than fate or the will of the Gods.
  • (1643) Evangelista Torricelli's invention of the barometer led to an understanding of the properties of air.
  • (1644) René Descartes challenged Aristotle's logic based on rational thinking with his own mathematical logic and attempted to describe the whole universe in mathematical terms. He was still not convinced of the value of experimental method.
  • (1656) Christiaan Huygens invented the pendulum clock enabling scientific experiments to be supported by accurate time measurements for the first time.
  • (1660) The Royal Society was founded in London to encourage scientific discovery and experiment.
  • (1661) Robert Boyle introduced the concept of chemical elements based on empirical observations rather than Aristotle's logical earth, fire, water and air.
  • (1663) Otto von Guericke devised an experiment using his Magdeburg hemispheres to disprove Aristotle's claim that a vacuum can not exist.
  • (1665) Robert Hooke published his Micrographia, based on observations made with his compound microscope, which opened a window on the previously unseen microscopic world, raising questions about life itself.
  • (1666) The French Académie des Sciences was founded in Paris.
  • (1668) Antonie van Leeuwenhoek expanded on Hooke's observations and established microbiology.
  • (1687) Isaac Newton derived a set of mathematical laws which provided the basis of a comprehensive understanding of the physical world.
  • (1700) The German Academy of Sciences was founded in Berlin.

The Age of Reason marked the triumph of evidence over dogma. Or did it? There remained one great mystery yet to be unravelled but it was another 200 years before it came up for serious consideration: The Origin of Species.


1514 Polish polymath and Catholic cleric Nicolaus Copernicus, a mathematician, economist, physician, linguist, jurist and accomplished statesman with astronomy as a hobby, published and circulated to a small circle of friends a preliminary draft manuscript in which he described his revolutionary idea of the heliocentric universe in which celestial bodies moved in circular motions around the Sun, challenging the notion of the geocentric universe. Such heresies were unthinkable at the time. They not only contradicted the conventional wisdom that the World was the centre of the universe but, worse still, they undermined the story of creation, one of the fundamental beliefs of the Christian religion. Dangerous stuff!

It was not until around 1532 that Copernicus completed the work, which he called De Revolutionibus Orbium Coelestium "On the Revolutions of the Heavenly Spheres", but he still declined to publish it. Historians do not agree on whether this was because Copernicus was unsure that his observations and calculations would be sufficiently robust to challenge Ptolemy's Almagest, which had survived almost 1400 years of scrutiny, or whether he feared the wrath of the church. Copernicus' model was however simpler than Ptolemy's geocentric model and matched more closely the observed motions of the planets. He eventually agreed to publish the work at the end of his life and the first printed copy was reportedly delivered to him on his deathbed, at the age of seventy, in 1543.

As it turned out, "De Revolutionibus Orbium Coelestium" was put on the Catholic church's index of prohibited books in 1616, as a result of Galileo's support for its revolutionary theory, and remained there until 1835.


De Revolutionibus was one of the most important books ever written and its ideas ignited the Scientific Revolution (See above), but only about 300 or 400 copies were printed and it has recently become known as "the book that nobody read".


1533 Frisian (from what is now the Netherlands) mathematician and cartographer Gemma Frisius proposed the idea of triangulation for surveying and producing maps. Because it was often inconvenient or difficult to measure large distances directly, he described how the distance to a distant target location could be determined locally, without actually going there, using only angle measurements. By sighting the target from two reference points at either end of a local baseline of known length, and measuring the angles between the baseline and the lines of sight to the target, the distance to the target could be calculated using simple trigonometry. It was thus easier to survey the countryside and construct maps by dividing the area into triangles rather than squares. A similar method had been used around 600 B.C. by the Greek philosopher Thales but it had not been commonly adopted. Triangulation is still used today in applications from surveying to celestial navigation.
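The calculation reduces to a couple of lines of trigonometry. A minimal sketch in Python, with purely illustrative numbers, computing the perpendicular distance from the baseline to the target using the sine rule:

    import math

    # A sketch of Frisius' triangulation: alpha and beta are the angles
    # measured at each end of a baseline of known length, between the
    # baseline itself and the lines of sight to the target.
    def distance_to_target(baseline_m, alpha_deg, beta_deg):
        a = math.radians(alpha_deg)
        b = math.radians(beta_deg)
        # Sine rule gives the side from the first observer to the target;
        # its component perpendicular to the baseline is the distance.
        side = baseline_m * math.sin(b) / math.sin(a + b)
        return side * math.sin(a)

    # A 1 km baseline with sightings at 60 and 75 degrees:
    print(distance_to_target(1000.0, 60.0, 75.0))  # about 1183 m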


In 1553 Frisius was also the first to describe how longitude could be determined by comparing local solar time with the time at some reference location provided by an accurate clock, but no such clocks were available at the time.
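The arithmetic behind the proposal is simple even though the clocks of the day were not up to the task. A minimal sketch in Python, assuming an accurate reference clock:

    # The Earth rotates 360 degrees in 24 hours, so each hour of difference
    # between local solar time and the reference clock corresponds to 15
    # degrees of longitude (positive = east, negative = west).
    def longitude_deg(local_solar_hours, reference_hours):
        return 15.0 * (local_solar_hours - reference_hours)

    # Local noon when the reference clock reads 14:00:
    print(longitude_deg(12.0, 14.0))  # -30.0, i.e. 30 degrees west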


1543 Belgian physician and professor at the University of Padua, Andries van Wesel, more commonly known as Vesalius, published De Humani Corporis Fabrica (On the Structure of the Human Body), one of the most influential books on human anatomy. He carried out his research on the corpses of executed criminals and discovered that the research and conclusions published by the previous, undisputed authority on this subject, Galen, could not possibly have been based on an actual human body. Vesalius was one of the first to rely on direct observations and scientific method rather than the rational logic practised by the ancient philosophers and in so doing overturned 1300 years of conventional wisdom. Such challenges to long held theories marked the start of the Scientific Revolution.


1551 Damascus born Muslim polymath, Taqi al-Din, working in Egypt, described an impulse turbine used to drive a rotating spit over a fire. It was simply a jet of steam impinging on the blades of a paddle wheel mounted on the end of the spit. Like Hero's reaction turbine it was not developed at the time for use in more useful applications.

See more about Impulse Turbines.

See more about Steam Engines.


1576 Danish astronomer and alchemist, Tycho Brahe, built an observatory where, with his assistant Johannes Kepler, he gathered data with the aim of constructing a set of tables for calculating the position of the planets for any date in the past or in the future. He lived before the invention of the telescope and his measurements were made with a cross staff, a simple mechanical device similar to a protractor used for measuring angles. Nevertheless, despite his primitive instruments, he set new standards for precise and objective measurements but he still relied on empirical observations rather than mathematics for his predictions.


Brahe accepted Copernicus' heliocentric model for the orbits of planets which explained the apparent anomalies in their orbits exhibited by Ptolemy's geocentric model, however he still clung on to the Ptolemaic model for the orbits of the Sun and Moon revolving around the Earth as this fitted nicely with the notion of Heaven and Earth and did not cause any conflicts with religious beliefs.

However, using the data gathered together with Brahe, Kepler was able to confirm the heliocentric model for the orbits of planets, including the Earth, and to derive mathematical laws for their movements.


See also the Scientific Revolution


A wealthy, hot-headed and extroverted nobleman, said to own one percent of the entire wealth of Denmark, Brahe had a lust for life and food. He wore a gold prosthesis in place of his nose which it was claimed had been cut off by his cousin in a duel over who was the better mathematician.


In 1601, Brahe died in great pain in mysterious circumstances, eleven days after becoming ill during a banquet. Until recently the accepted explanation of the cause of death, provided by Kepler, was that it was an infection arising from a strained bladder, or from rupture of the bladder, resulting from staying too long at the dining table.

By examining Brahe's remains in 1993, Danish toxicologist Bent Kaempe determined that Brahe had died from acute mercury poisoning, which would have exhibited similar symptoms. Among the many suspects, in 2004 the finger was firmly pointed at Kepler, the frail, introverted son of a poor German family, by the writers Joshua and Anne-Lee Gilder.

Kepler had the motive: he was consumed by jealousy of Brahe and he wanted his data, which could make him famous but which had been denied to him. He also had the means and the opportunity. After Tycho's death, when his family were distracted by grief, Kepler simply walked away with the priceless observations which belonged to Tycho's heirs.


With only a few tantalising facts to go on, historians attempt to construct a more complete picture of what happened in the distant past. In Brahe's case there could be another explanation of his demise. From the available facts it could be concluded that Brahe's death was due to an accidental overdose of mercury, which at the time was the conventional medication prescribed for the treatment of syphilis, or from syphilis itself. This is corroborated by the fact that one of the symptoms of the advanced state of the disease is the loss of the nose due to the collapse of the bridge tissue. Brahe's hedonistic lifestyle could well have made this a possibility. Kepler's purloining of Brahe's data could have been a simple act of opportunism rather than the motivation for murder.


1593 The thermometer was invented by Italian astronomer and physicist Galileo Galilei. It has been variously called an air thermometer or a water thermometer but it was known at the time as a thermoscope. His "thermometer" consisted of a glass bulb at the end of a long glass tube held vertically with the open end immersed in a vessel of water. As the temperature changed, the water would rise or fall in the tube due to the contraction or expansion of the air. It was sensitive to air pressure and could only be used to indicate temperature changes since it had no scale. In 1612 Italian Santorio Santorio added a scale to the apparatus, creating the first true thermometer so that, for the first time, temperatures could be quantified.


There is no evidence that the decorative, so called, Galileo thermometers based on the Archimedes principle were invented by Galileo or that he ever saw one. They consist of several sealed glass floats in a sealed, liquid-filled glass cylinder. The density of the liquid varies with the temperature and the floats are designed with different densities so as to float or sink at different temperatures. There were however thriving glass blowing and thermometer crafts based in Florence (Tuscany) where the Accademia del Cimento, which was noted for its instrument making, produced many of these thermometers, also known as Florentine thermometers or Infingardi (Lazy-Ones) or Termometros Lentos (Slow) because of the slowness of the motion of the small floating spheres in the alcohol of the vial. It is quite likely that these designs were the work of the Grand Duke of Tuscany Ferdinand II who had a special interest in thermometers and meteorology.


1595 Swiss clockmaker Jost Burgi invented the gravity remontoire, a constant force escapement which improved the accuracy of timekeeping mechanisms by over an order of magnitude.

See more about the remontoire


1600 William Gilbert of Colchester, physician to Queen Elizabeth I of England published "De Magnete" (On the Magnet) the first ever work of experimental physics. In it he distinguished for the first time static electric forces from magnetic forces. He discovered that the Earth is a giant magnet just like one of the stones of Peregrinus, explaining how compasses work. He is credited with coining the word "electric" which comes from the Greek word "elektron" meaning amber.


Many wondrous powers have been ascribed to magnets and to this day magnetic bracelets are believed by some to have therapeutic benefits. In Gilbert's time it was believed that an adulteress could be identified by placing a magnet under her pillow. This would cause her to scream or be thrown out of bed as she slept.

Gilbert proved amongst other things that the smell of garlic did not affect a ship's compass. It is not known whether he experimented with adulteresses in his bed.


Gilbert was the English champion of the experimental method of scientific discovery, considered inferior to rational thought by the Greek philosopher Aristotle and his followers. He held the Copernican or heliocentric view, dangerous at the time, that the Sun, not the Earth, was the centre of the universe. He was a contemporary of the Italian astronomer Galileo Galilei (1564-1642) who made a principled stand in defence of the founding of physics on scientific method and precise measurements rather than on metaphysical principles and formal logic. These views brought Galileo into serious confrontation with the church and he was tried and punished for his heresies.

Experimental method rather than rational thought was the principle behind the Scientific Revolution which separated Science (theories which can be proved) from Philosophy (theories which can not be proved).


See also Bertrand Russell's definition of philosophy.


Gilbert died of Bubonic plague in 1603 leaving his books, globes, instruments and minerals to the College of Physicians but they were destroyed in 1666 in the great fire of London which mercifully also brought the plague to an end.


1601 An early method of hardening wrought iron to make hard edged tool steel and swords, known as the cementation process, was first patented by Johann Nussbaum of Magdeburg in Germany though the process was already known in Prague in 1574. It was also patented once more in England by William Ellyot and Mathias Meysey in 1614.

The method employed a solid diffusion process involving the diffusion of carbon into the wrought iron to increase its carbon content to between 0.5% and 1.5%. Wrought iron rods or bars were covered with powdered charcoal (called cement) and sealed in a long airtight stone or clay lined brick box, like a sarcophagus, and heated to 1,000°C in a furnace for between one and two weeks. The nature of the diffusion process resulted in a non-uniform carbon content which was high near the surface of the bar, diminishing towards its centre, and the bars could still contain slag inclusions from the original precursor bloom from which the wrought iron was made. The process also caused blistering of the steel, hence the product made this way was called blister steel.


See more about Iron and Steel Making


1603 Italian shoemaker and part-time alchemist from Bologna, Vincenzo Cascariolo, searching for the "Philosopher's Stone" for turning common metals into gold, discovered phosphorescence instead. He heated a mixture of powdered coal and heavy spar (barium sulphate) and spread it over an iron bar. It did not turn into gold when it cooled, as expected, but he was astonished to see it glow in the dark. Though the glow faded it could be "reanimated" by exposing it to the sun and so became known as "lapis solaris" or "sun stone", a primitive method of solar energy storage in chemical form.


1605 A five digit encryption code, consisting only of the letters "a" and "b" and giving 32 combinations to represent the letters of the alphabet, was devised by English philosopher and lawyer Francis Bacon. He called it a biliteral code. It is directly equivalent to the five bit binary Baudot code of ones and zeros used for over 100 years for transmitting data in twentieth century telegraphic communications.
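The principle is easy to demonstrate in code. A minimal sketch in Python, using the modern 26-letter alphabet (an assumption for simplicity: Bacon's own alphabet combined I/J and U/V, so his code needed only 24 of the 32 combinations):

    # Each letter is replaced by a five-character group of "a"s and "b"s,
    # exactly analogous to a five-bit binary code.
    def bacon_encode(text):
        groups = []
        for ch in text.upper():
            if ch.isalpha():
                n = ord(ch) - ord('A')
                groups.append(format(n, '05b').replace('0', 'a').replace('1', 'b'))
        return ' '.join(groups)

    print(bacon_encode("Bacon"))  # aaaab aaaaa aaaba abbba abbab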

More importantly Bacon, together with Gilbert, was an early champion of scientific method although it is not known whether they ever met.

Bacon criticized the notion that scientific advances should be made through rational deduction. He advocated the discovery of new knowledge through scientific experimentation. Phenomena would be observed and hypotheses made based on the observations. Tests would then be conducted to verify the hypotheses. If the tests produced reproducible results then conclusions could be made.


In his 1605 publication "The Advancement of Learning", Bacon coined the dictum "If a man will begin with certainties, he will end up with doubts; but if he will be content to begin with doubts, he shall end up in certainties".


See also the Scientific Revolution


Bacon died as a result of one of his experiments. He investigated preserving meat by stuffing a chicken with snow. The experiment was a success but Bacon died of bronchitis contracted either from the cold chicken or from the damp bed, reserved for VIPs and unused for a year, where he was sent to recover from his chill.


There are many "Baconians" who claim today that at least some of Shakespeare's plays were actually written by Bacon. One of the many arguments put forward is that only Bacon possessed the necessary wide range of knowledge and erudition displayed in Shakespeare's plays.


1608 German born spectacle lens maker Hans Lippershey working in Holland, applied for a patent for the telescope for which he envisioned military applications. The patent was not granted on the basis that "too many people already have knowledge of this invention". Nevertheless, Lippershey's patent application was the first documented evidence of such a device. Legend has it that the telescope was discovered by accident when Lippershey, or two children playing with lenses in his shop, noticed that the image of a distant church tower became much clearer when viewed through two lenses, one in front of the other. The discovery revolutionised astronomy. Up to that date the pioneering work of Copernicus, Brahe and Kepler had all been based on many thousands of painstaking observations made with the naked eye without the advantage of a telescope.


See also the Scientific Revolution


1609 On the death of Danish Imperial Mathematician Tycho Brahe in 1601, German mathematician Johannes Kepler inherited his position along with the astronomical data that Brahe had gathered over many years of painstaking observations. From this mass of data on planetary movements, collected without the help of a telescope, Kepler derived three Laws of Planetary Motion, the first two published as "Astronomia Nova" in 1609 and the third as "Harmonices Mundi" in 1619. These laws are:

  • The Law of Orbits: All planets move in elliptical orbits, with the Sun at one focus.
  • The Law of Areas: A line that connects a planet to the Sun sweeps out equal areas in equal times. See Diagram
  • The Law of Periods: The square of the period of any planet is proportional to the cube of the semi-major axis of its orbit.
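
The third law is easily checked against modern planetary data. A minimal sketch in Python, working in units where the Earth's orbit (one astronomical unit, one year) makes the constant of proportionality equal to one:

    # Kepler's third law: T^2 = a^3 in Sun-centred units, so the period
    # in years is the 3/2 power of the semi-major axis in AU.
    def orbital_period_years(semi_major_axis_au):
        return semi_major_axis_au ** 1.5

    print(orbital_period_years(1.524))  # Mars: about 1.88 years
    print(orbital_period_years(5.203))  # Jupiter: about 11.86 years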

Kepler's laws were the first to enable accurate predictions of future planetary orbits and at the same time they effectively disproved the Aristotelian and Ptolemaic model of geocentric planetary motion. Further evidence was provided during the same period by Galileo (See following entry).


Kepler derived these laws empirically from the years of data gathered by Brahe, a monumental task, but he was unable to explain the underlying principles involved. The answer was eventually provided by Newton.


Recently Kepler's brilliance has been tarnished by forensic studies which suggest that he murdered Brahe in order to get his hands on his observations. (See Brahe)


See also the Scientific Revolution


1610 Italian physicist and astronomer Galileo Galilei was the first to observe the heavens through a refracting telescope. Using a telescope he had built himself, based on what he had heard about Lippershey's recent invention, he observed four moons, which had not previously been visible with the naked eye, orbiting the planet Jupiter. This was revolutionary news since it was definitive proof that the Earth was not the centre of all celestial movements in the universe, overturning the geocentric or Ptolemaic model of the universe which for more than a thousand years had been the bedrock of religious and Aristotelian scientific thought. At the same time his observations of mountains on the Earth's moon contradicted Aristotelian theory, which held that heavenly bodies were perfectly smooth spheres.

Publication of these observations in his treatise Sidereus Nuncius (Starry Messenger) gave fresh impetus to the Scientific Revolution in astronomy started by the publication of Copernicus' heliocentric theory almost 100 years before, but brought Galileo into a confrontation with the church. Charged with heresy, Galileo was made to kneel before the inquisitor and confess that the heliocentric theory was false. He was found guilty and sentenced to house arrest for the rest of his life.


In 1612, having determined that Jupiter's four brightest natural satellites, Io, Europa, Ganymede and Callisto, (also known as the Galilean Moons), made regular orbits around the planet, Galileo noted that the time at which they passed a reference position in their orbits, such as the point at which they begin to eclipse the planet, would be both regular and the same for any observer in the World. This could therefore be used as the basis for a universal timer or clock which in turn could be used to determine longitude.


Galileo carried out many investigations and experiments to determine the laws governing mechanical movement. He is famously reputed to have demonstrated that all bodies fall to Earth at the same rate, regardless of their mass, by dropping different sized balls from the top of the Leaning Tower of Pisa, thus disproving Aristotle's theory that the speed of falling bodies is directly proportional to their weight, but there is no evidence that Galileo actually performed this experiment. Such an experiment had, however, already been performed by Simon Stevin in 1586.

In 1971, Apollo 15 astronaut David Scott repeated Galileo's experiment on the airless Moon with a feather and a hammer demonstrating that, unhampered by any atmosphere, they both fell to the ground at the same rate.


Galileo actually attempted to measure the rate at which a body falls to Earth under the influence of gravity, but he did not have an accurate method of measuring the time since the speed of the falling body was too fast and the duration too short. He therefore determined to "dilute" the effect of gravity by rolling a ball down an inclined plane to slow it down and increase the transit time. He expected to find that the distance travelled would increase by a fixed amount for each fixed increment in time. Instead he discovered that the distance travelled is proportional to the square of the time. See more about Galileo's "Laws of Motion"
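
In modern notation Galileo's result is d = ½gt², where g is the acceleration due to gravity, a constant he had no means of measuring directly. A minimal sketch in Python (using the modern value of g) showing the square-law growth and the odd-number progression of the increments observed on the inclined plane:

    g = 9.81  # m/s^2, the modern value, unknown to Galileo

    previous = 0.0
    for t in range(1, 5):
        d = 0.5 * g * t ** 2  # total distance fallen after t seconds
        print(t, round(d, 1), round(d - previous, 1))
        previous = d
    # The increments 4.9, 14.7, 24.5, 34.3 are in the ratio 1:3:5:7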


In 1602 his inquisitive mind led him to make a remarkable discovery about the motion of pendulums. While sitting in a cathedral he observed the swinging of a chandelier and, using his pulse to determine the period of its swing, he was greatly surprised to find that as the movement of the pendulum slowed down, its period remained the same. His curiosity piqued, he followed up with a series of experiments and determined that the only factor affecting the period of the pendulum's swing was its length. It was independent of the arc of the swing, the weight of the pendulum bob and the speed of the swing. By using pendulums of different length Galileo was able to produce timing devices which were much more accurate than his pulse.

It can't have been easy, counting and keeping a running total of pendulum swings and heart rate pulses at the same time.

About 40 years later, Christiaan Huygens developed a mathematical equation defining the period of the pendulum and went on to use the pendulum in the construction of the first accurate clocks.
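
Huygens' formula confirms Galileo's observation that, for small swings, the period depends only on the pendulum's length (and the local strength of gravity). A minimal sketch in Python:

    import math

    # Huygens' pendulum formula for small swings: T = 2*pi*sqrt(L/g).
    def pendulum_period(length_m, g=9.81):
        return 2 * math.pi * math.sqrt(length_m / g)

    # A pendulum just under a metre long beats once per second:
    print(pendulum_period(0.994))  # about 2.0 s, one second per swing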


See more about Oscillators and Timekeeping


1614 Scottish nobleman John Napier Baron of Merchiston, published Mirifici Logarithmorum Canonis Descriptio - Description of the Marvellous Canon (Rule) of Logarithms in which he described a new method for carrying out tedious multiplication and division by simpler addition and subtraction, together with a set of tables he had calculated for the purpose. The logarithmic tables contained 241 entries which had taken him 20 years to compute.

Napier's logarithms were not the logarithms we would recognise today. Neither were they natural logarithms with a base of "e" as is often misquoted; logarithms based on the constant e were only formalised by Euler over a century later.

Napier was aware that numbers in a geometric series could be multiplied by adding their exponents (powers), for example q² multiplied by q³ = q⁵, and that division could be performed by subtracting the exponents. Simple though the idea of logarithms may be, it had not been considered before because with a simple base of 2 and exponent n, where n is a whole number, the numbers represented by 2ⁿ become very large very quickly as n increases. This meant there was no obvious way of representing the intervening numbers. The idea of fractional exponents would have (and eventually did) solve this problem, but at the end of the sixteenth century people were just getting to grips with the notion of zero and were not comfortable with the idea of fractional powers.

To design a way of representing more numbers, while still retaining whole number exponents, Napier came up with the idea of making the base number smaller. But if the base number was very small there would be too many numbers. Using the number 1 (unity) as a base would not work either, since all the powers of 1 are equal to 1. He therefore chose (1 − 10⁻⁷) or 0.9999999 as the base from which he constructed his tables. Napier named his exponents logarithms, from the Greek logos and arithmos, roughly translated as ratio-number.
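
The labour-saving principle is easily demonstrated with the common (base 10) logarithms that Briggs later computed (described below): a multiplication becomes two table look-ups, an addition and a reverse look-up. A minimal sketch in Python:

    import math

    # Multiplication via logarithms: look up the logs, add them, then
    # take the antilog of the sum. Base-10 logs stand in for the tables.
    a, b = 3172.0, 458.9
    log_sum = math.log10(a) + math.log10(b)
    product = 10 ** log_sum  # the "antilog" step

    print(product)  # about 1455630.8
    print(a * b)    # the same, by direct multiplication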


Napier's publication was an instant hit with astronomers and mathematicians. Among these was Henry Briggs, mathematics professor at Gresham College, London who travelled 350 miles to Edinburgh the following year to meet the inventor of this new mathematical tool.

He stayed a month with Napier and in discussions they considered two major improvements that they both readily accepted. Briggs suggested that the tables should be constructed from a base of 10 rather than (1 − 10⁻⁷), which meant adopting fractional exponents, and Napier agreed that the logarithm of 1 should be 0 (zero) rather than the logarithm of 10⁷ being 0 as it was in his original tables. Briggs' reward was to have the job of calculating the new logarithmic tables which he eventually completed and published as Arithmetica Logarithmica in 1624. His tables contained the logarithms of 30,000 natural numbers to 14 places.


Meanwhile in 1617 Napier published a description of a new invention in his Rabdologiae, a "collection of rods". It was a practical method of multiplication using "numbering rods" with numbers marked off on them. Known as "Napier's Bones", surprisingly they did not use his method of logarithms. (See also the following item - Gunter)

Already old and frail, Napier died the same year without seeing the final results of his work.

Briggs' logarithms are still in use today, now known as common logarithms.


Napier himself considered his greatest work to be a denunciation of the Roman Catholic Church which he published in 1593 as A Plaine Discovery of the Whole Revelation of St John.


1620 Edmund Gunter professor of astronomy at Gresham College, where Briggs was professor of mathematics, made a straight logarithmic scale engraved on a wooden rod and used it to perform multiplication and division using a set of dividers or calipers to add or subtract the logarithms. The predecessor to the slide rule. (See the following item)


1621 English mathematician and clergyman, William Oughtred, friend of Briggs and Gunter from Gresham College, put two of Gunter's scales (See previous item) side by side enabling logarithms to be added directly and invented the slide rule, the essential tool of every engineer for the next 350 years until electronic calculators were invented in the 1970s.

Oughtred also produced a circular version of the slide rule.


1628 English physician William Harvey published "De Motu Cordis" ("On the Motion of the Heart and Blood") in which he was the first to describe the circulation of blood and how it is pumped around the body by the heart, dispelling any remaining Aristotelian beliefs that the heart was the seat of intelligence and the brain was a cooling mechanism for the blood.


See also the Scientific Revolution


1629 Italian Jesuit priest Nicolo Cabeo published Philosophia Magnetica in which electric repulsion is identified for the first time.


1636 The first reasonably accurate measurement of the speed of sound was made by French polymath Marin Mersenne who determined it to be 450 m/s (1476 ft/s). This compares with the currently accepted velocity of 343 m/s (1,125 ft/s; 1,235 km/h; 767 mph), or a kilometre in 2.91 seconds or a mile in 4.69 seconds in dry air at 20 °C (68 °F).

(For reference, note also that the speed of light is 300,000,000 m/s compared with the speed of sound of around 343 m/s.)


Seventeenth century methods of measuring the speed of sound were usually based on observations of artillery fire and were notoriously inaccurate. Since the transit time of light over a given distance is negligible compared with the transit time of sound, by measuring the delay between seeing the powder flash from a distant cannon and hearing the explosion, the time for the sound to cover a given distance and hence the speed could be estimated. For practical measurements the distance of the artillery from the observer had to be a kilometre or more to obtain a reasonably long delay of a few seconds which could be measured by available means. Even so, the only available methods for measuring such short times were by means of a pendulum or by counting the observer's own pulse beats which were hopelessly imprecise, error prone and dependent on operator reaction times.

Furthermore, because the effects of temperature, pressure, density, wind and moisture content of the air on the speed of propagation were unknown, they were not taken into account in the measurements.


Variations on the above procedure are still used today as traditional folk methods of estimating the distance to a lightning strike by counting the seconds between the flash and its following thunderclap.
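
A minimal sketch of the arithmetic in Python, using the modern figure for the speed of sound:

    # Distance = speed of sound x counted delay: roughly 3 seconds per
    # kilometre, or 5 seconds per mile, at 343 m/s.
    SPEED_OF_SOUND = 343.0  # m/s in dry air at 20 degrees C

    def distance_to_strike_m(delay_seconds):
        return SPEED_OF_SOUND * delay_seconds

    print(distance_to_strike_m(6.0))  # about 2060 m, i.e. roughly 2 km away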


Alternative set-ups, used at the time, for calculating the speed of sound involved creating a sharp noise in front of a wall or cliff and measuring the time delay before hearing its echo. The round trip distance to the wall and back divided by the time gives the speed of sound. Echo delays in practical, controlled sites are usually very short. A distance of 100 metres to the reflecting surface (200 metres round trip) results in an echo delay of only around half a second. This leads to great difficulties in measuring the time delay with the crude equipment available.


Milestones in the Understanding of Acoustics and Sound Propagation


  • (Circa 350 B.C.) Aristotle was one of the first to speculate on the transmission of sound, writing in his treatise "On the Soul" that "sound is a particular movement of air".

  • 1508 Leonardo Da Vinci, using a water analogy, showed in drawings that sound travels in waves like the waves on a pond.

  • 1635 Pierre Gassendi, French priest, philosopher, scientific chronicler and experimentalist and a friend of Mersenne, is reported to have measured the speed of sound as a somewhat high 478 m/s (1568 ft/s), though this experiment was not documented in his workbooks. Using the artillery method he compared the low rumbling sound from a cannon with the higher pitched sound of a musket from the same distance and concluded that the speed of sound is independent of the pitch (frequency).
  • Gassendi was an atomist and did not believe the wave theory of sound. He believed that sound and light are carried by particles which are not affected by the surrounding medium of air or wind through which they travel. In other words, sound was a stream of atoms emitted from the sounding body; the speed of sound is the velocity of the moving atoms and its frequency is the number of atoms emitted per second.


  • 1636 Marin Mersenne, in contrast to his friend Gassendi, held the more rational view that sound travelled in waves like the ripples on water. Using a pendulum to measure the time between the flash of exploding gunpowder and the arrival of the sound, he determined the speed of sound to be 450 m/s (1476 ft/s). As measurement techniques improved this was revised to a more accurate 316 m/s (1036 ft/s).
  • He also established that the intensity of sound, like that of light, is inversely proportional to the distance from its source and showed the speed to be independent of pitch as well as intensity (loudness).


    The same year Mersenne also published his "Harmonie Universelle" describing the acoustic behaviour of stretched strings as used in musical instruments, which provided the basis for modern musical acoustics. The relationship between frequency and the tension, weight and length of the strings was expressed in three laws, known as Mersenne's Laws, as follows:

    The fundamental frequency f₀ of a vibrating string (that is, without harmonics) is:

    1. Inversely proportional to the length L of the string (also known as Pythagoras' Law):  f₀ ∝ 1/L
    2. Inversely proportional to the square root of the mass per unit length μ:  f₀ ∝ 1/√μ
    3. Proportional to the square root of the stretching force F:  f₀ ∝ √F

    The three laws can be combined in a single expression thus:

    f₀ = (1/2L) √(F/μ)
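
    A minimal sketch of the combined law in Python, with purely illustrative string values:

        import math

        # Mersenne's combined law: f0 = (1/2L) * sqrt(F/mu)
        def fundamental_frequency(length_m, tension_n, mass_per_metre_kg):
            return math.sqrt(tension_n / mass_per_metre_kg) / (2 * length_m)

        # A 0.65 m string at 70 N tension weighing 0.6 g per metre:
        print(fundamental_frequency(0.65, 70.0, 0.0006))  # about 263 Hz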


    Known as the "Father of Acoustics", Mersenne regularly corresponded with the leading mathematicians, astronomers and philosophers of the day, and in 1635 set up the informal, private Académie Parisienne where140 correspondents shared their research. This was the direct precursor of the French Académie des Sciences established by Colbert in 1666


  • 1660 Giovanni Alfonso Borelli and Vincenzo Viviani working at the Accademia del Cimento in Florence improved the sound timing techniques resulting in more consistent results and a value of 350 m/s (1148 ft/s) was generally accepted as the speed of sound.

  • 1660 Robert Boyle, using an improved vacuum pump, showed that the sound intensity from a bell housed in a glass chamber diminished to zero as the air was pumped out. From this he concluded that sound can not be transmitted through a vacuum and that sound is a pressure wave which requires a medium such as air to transmit it. See also the luminiferous aether and the transmission of light.

  • 1687 Isaac Newton in his Principia Mathematica showed that the speed of sound depended on the density and compressibility of the medium through which it travelled and could be calculated from the following relationship using air as an example.
  • V = √(P/ρ)

    Where: V is the sound velocity, P is the atmospheric pressure and ρ is the density of the air; the ratio P/ρ is a measure of its compressibility.

    Newton used echoes from a wall at the end of an outdoor corridor at Trinity College, Cambridge to estimate the speed of sound to verify his calculations, but the calculated value of 295 m/s (968 ft/s) was consistently around 16% less than his measured experimental values and those achieved by others at the time.

    The unexplained difference is attributed to the assumptions made, and not made. These include the following:

    • Newton used a mechanical interpretation of sound as being "pressure" pulses transmitted through adjacent fluid particles.
    • When a pulse is propagated through a fluid, particles of the fluid move in simple harmonic motion at a constant frequency, and if this is true for one particle it must be true for all adjacent particles.
    • Possible errors due to temperature, pressure, moisture content and wind, elasticity of the air and whether they were constant, proportional or non-linear were mostly unknown at the time and were consequently ignored.

  • 1740 Giovanni Lodovico Bianconi, an Italian doctor, demonstrated that the speed of sound in air increases with temperature. This is because molecules at higher temperatures have more energy and vibrate more quickly, and since they vibrate faster they can transmit sound waves more quickly.

  • 1746 Jean-Baptiste le Rond d'Alembert, a French philosopher, mathematician and music theorist deduced the Wave Equation relating the velocity of a sound wave v to its frequency f and wavelength λ, based on studies of vibrating strings, as follows:
  • v = f λ

    The relationship also applies to electromagnetic waves.
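
    A minimal numerical check of the relation in Python:

        # v = f * lambda: a 440 Hz tone (concert A) in air at 343 m/s
        v, f = 343.0, 440.0
        print(v / f)  # wavelength of about 0.78 m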

     

  • 1802 Pierre-Simon Laplace and his young protégé Jean-Baptiste Biot rectified Newton's troublesome error and followed up by publishing a formal correction in 1816. They explained that in a pressure wave, when the sound wave compresses and rarefies the air in quick succession, Boyle's Law does not apply because the temperature does not remain constant. Heat is liberated during the compression part of the cycle but, because of the relatively high frequency of the sound wave, it does not have time to dissipate or be reabsorbed during the low pressure half of the cycle. This causes the local temperature to increase, in turn increasing the local pressure and raising the speed of the sound correspondingly. Thus Newton's calculations were brought into line with the experimental results.
  • In modern terms, the rapidly fluctuating compression and expansion of the air through which the sound wave passes is an adiabatic process, not an isothermal process.
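
    The size of Newton's shortfall can be checked numerically. The Python sketch below is a minimal illustration using modern sea-level values for pressure and density (figures not given in the text), comparing Newton's isothermal formula V = √(P/ρ) with the adiabatic correction V = √(γP/ρ), where γ ≈ 1.4 is the ratio of the specific heats of air.

        import math

        P = 101325.0   # standard atmospheric pressure in Pa (modern value)
        rho = 1.225    # density of air in kg/m³ at sea level (modern value)
        gamma = 1.4    # ratio of specific heats of air (adiabatic correction)

        v_newton = math.sqrt(P / rho)           # isothermal assumption, ~288 m/s
        v_laplace = math.sqrt(gamma * P / rho)  # adiabatic assumption, ~340 m/s

        print(v_newton, v_laplace)
        print(1 - v_newton / v_laplace)         # shortfall of ~15%, as Newton found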


1642 At the age of eighteen, French mathematician and physicist, Blaise Pascal constructed a mechanical calculator capable of addition and subtraction. Known as the Pascaline, it was the forerunner of computing machines. Despite its utility, this great innovation failed to capture the imagination (or the attention) of the scientific and commercial public and only fifty were made. Thirty years later it was eclipsed by Leibniz' four function calculator which could perform multiplication and division as well as addition and subtraction.


Pascal also did pioneering work on hydraulics, resulting in the statement of Pascal's principle, that "pressure will be transmitted equally throughout a confined fluid at rest, regardless of where the pressure is applied". He explained how this principle could be used to exert very high forces in a hydraulic press. Such a system would have two cylinders with pistons of different cross-sectional areas connected to a common reservoir or simply connected by a pipe. When a force is exerted on the smaller piston, it creates a pressure in the fluid equal to the force divided by the area of the piston. This same pressure also acts on the larger piston, and because its area is greater, the pressure translates into a proportionately larger force on the larger piston. The ratio of the two forces is thus equal to the ratio of the areas of the two pistons, which is the hydraulic mechanical advantage. Thus the cylinders act in a similar way to a lever, as described by Archimedes, which effectively magnifies the force exerted. 150 years later Bramah was granted a patent for inventing the hydraulic press.
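
A minimal numerical sketch of the press in Python (the piston sizes and input force are made-up, illustrative values): the pressure created at the small piston acts unchanged on the large piston, multiplying the force by the ratio of the areas.

    import math

    def piston_area(diameter_m):
        return math.pi * (diameter_m / 2) ** 2

    small_area = piston_area(0.02)   # 2 cm diameter input piston (illustrative)
    large_area = piston_area(0.20)   # 20 cm diameter output piston (illustrative)

    input_force = 100.0                   # newtons applied to the small piston
    pressure = input_force / small_area   # Pascal's principle: equal throughout the fluid
    output_force = pressure * large_area  # force delivered by the large piston

    print(output_force)              # 10,000 N
    print(large_area / small_area)   # mechanical advantage = area ratio = 100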

The SI unit of pressure has been named the "pascal" in his honour, replacing the older, more descriptive, pounds per square inch (psi) or newtons per square metre (N/m²).


Besides hydraulics, Pascal explained the concept of a vacuum. At the time, the conventional Aristotelian view was that space must be filled with some invisible matter and that a vacuum was an impossibility.


In 1653 Pascal described a convenient shortcut for determining the coefficients of a binomial series, now called Pascal's Triangle and the following year, in response to a request from a gambling friend, he used it to derive a method of calculating the odds of particular outcomes of games of chance. In this case, two players wishing to finish a game early, wanted to divide their remaining stakes fairly depending on their chances of winning from that point. To arrive at a solution, he corresponded with fellow mathematician Fermat and together they worked out the notion of expected values and laid the foundations of the mathematical theory of probabilities.
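
The binomial coefficients of Pascal's Triangle can be generated row by row, each entry being the sum of the two entries above it, and the same numbers settle the gamblers' stake-splitting problem described above. The Python sketch below is a minimal illustration; the function names and the worked example are modern inventions, not Pascal's notation.

    def pascal_row(n):
        # n-th row of Pascal's Triangle: each entry is the sum of the two above it
        row = [1]
        for _ in range(n):
            row = [a + b for a, b in zip([0] + row, row + [0])]
        return row

    print(pascal_row(4))   # [1, 4, 6, 4, 1] - the coefficients of (x + y)^4

    def share_of_stakes(a, b):
        # Problem of points for a fair game: A needs a more wins, B needs b.
        # At most a + b - 1 further rounds decide the game, and A's fair share
        # of the pot is the probability of winning at least a of those rounds.
        n = a + b - 1
        row = pascal_row(n)
        favourable = sum(row[a:])   # outcomes in which A wins at least a rounds
        return favourable / 2 ** n

    print(share_of_stakes(1, 3))   # 0.875 - A, needing 1 more win, takes 7/8 of the pot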

See Pascal's Triangle and Pascal Probability

Pascal did not claim to have invented his eponymous triangle. It was known to Persian mathematicians in the eleventh and twelfth centuries and to Chinese mathematicians in the eleventh and thirteenth centuries as well as others in Europe and was often named after local mathematicians.


For most of his life Pascal suffered from poor health and he died at the age of 39 after abandoning science and devoting most of the last ten years of his short life to religious studies, culminating in the posthumous publication of Pensées (Thoughts), a justification of the Christian faith.


See also the Scientific Revolution


1643 Evangelista Torricelli served as Galileo's secretary and succeeded him as court mathematician to Grand Duke Ferdinand II and in 1643 made the world's first barometer for measuring atmospheric or air pressure by balancing the pressure force, due to the weight of the atmosphere, against the weight of a column of mercury. This was a major step in the understanding of the properties of air.


1644 French philosopher and mathematician René Descartes published Principia Philosophiae in which he attempted to put the whole universe on a mathematical foundation, reducing its study to one of mechanics. Considered to be the first of the modern school of mathematics, he believed that Aristotle's logic was an unsatisfactory means of acquiring knowledge and that only mathematics provided the truth, so that all reason must be based on mathematics.

He was still not convinced of the value of experimental method considering his own mathematical logic to be superior.

His most important work La Géométrie, published in 1637, includes his application of algebra to geometry from which we now have Cartesian geometry. He was also the first to describe the concept of momentum from which the law of conservation of momentum was derived.


See also the Scientific Revolution


Descartes accepted sponsorship by Queen Christina of Sweden who persuaded him to go to Stockholm. Her daily routine started at 5.00 a.m. whereas Descartes was used to rising at 11 o'clock. After only a few months in the cold northern climate, walking to the palace for 5 o'clock every morning, he died of pneumonia.


1646 The word Electricity was coined by English physician Sir Thomas Browne even though he contributed nothing else to the science.




1651 German chemist Johann Rudolf Glauber in his "Practise on Philosophical Furnaces" describes a safety valve for use on chemical retorts. It consisted of a conical valve with a Lead cap which would lift in response to excessive pressure in the retort allowing vapour to escape and the pressure to fall. The weight of the cap would reseat the valve once the pressure returned to an acceptable level. Today, modern implementations of Glauber's valve are the basis of the pressure vents incorporated into sealed batteries to prevent rupture of the cells due to pressure build up.

In 1658 Glauber published Opera Omnia Chymica, the "Complete Works of Chemistry", a description of different techniques for use in chemistry which was widely reprinted.


1654 The first sealed liquid-in-glass thermometer was produced by the artisan Mariani at the Accademia del Cimento in Florence for the Grand Duke of Tuscany, Ferdinand II. It used alcohol as the expanding liquid and, although his thermometers agreed with each other, it was inaccurate in absolute terms since there was no standardised scale in use.


1656 Building on Galileo's discoveries, Dutch physicist and astronomer Christiaan Huygens determined that the period P of a pendulum is given by:

P = 2 π √(l/g)

Where l is the length of the pendulum and g is the acceleration due to gravity.
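
In modern units the formula can be inverted to find the length needed for a given period, which is how a "seconds pendulum" (beating once per second, with a two-second period) was dimensioned. A minimal Python sketch, using the modern value of g:

    import math

    g = 9.81  # acceleration due to gravity in m/s² (modern value)

    def period(length_m):
        # Huygens' relationship: P = 2 * pi * sqrt(l / g)
        return 2 * math.pi * math.sqrt(length_m / g)

    def length_for_period(period_s):
        # Inverting the formula: l = g * (P / (2 * pi))^2
        return g * (period_s / (2 * math.pi)) ** 2

    print(period(1.0))              # ~2.0 s for a 1 m pendulum
    print(length_for_period(2.0))   # ~0.994 m for a two-second period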

Huygens made the first practical pendulum clock, making accurate time measurement possible for the first time. Previous mechanical clocks had pointers which indicated the progress of slowly rising water or slowly falling weights and were only accurate to large fractions of an hour. Huygens' clock enabled time to be measured in seconds. It depended on gearing a mechanical indicator to the constant periodic motion of a pendulum. Falling weights drove the pointer mechanism and transferred just enough energy to the pendulum to overcome friction and air resistance so that it did not stop.

Huygens' pendulum reduced the loss of time by clocks from about 15 minutes per day to about 15 seconds per day.


In 1675 Huygens published in the French Journal des Sçavans his design for the balance spring escapement which replaced the clock's pendulum regulator, enabling the design of watches and portable timekeepers.

The pendulum clock however remained the world's most accurate time-keeper for nearly 300 years until the invention of the quartz clock in 1927.


See more about Huygens' Clocks


Huygens also made many astronomical observations noting the characteristics of Saturn's rings and the surface of Mars. He was also the first to make a reasoned estimate of the distance of the stars. He assumed that Sirius had the same brightness as the Sun and from a comparison of the light intensity received here on Earth he calculated the distance to Sirius to be 2.5 trillion miles. It is actually about 20 times further away than this. There was however nothing wrong with Huygens' calculations. It was the assumption which was incorrect. Sirius is actually much brighter than the Sun, but he had no way of knowing that. Had he known the true brightness of Sirius, his estimate would have been much closer to the currently accepted value.
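
Huygens' reasoning rested on the inverse square law for light: if two stars are equally luminous, their distances go as the square root of the ratio of their apparent brightnesses. The Python sketch below repeats his estimate using modern brightness figures (which Huygens did not have) and then applies the present-day luminosity of Sirius, roughly 25 times that of the Sun, to show how correcting the assumption closes the gap.

    import math

    # Apparent brightness ratio Sun/Sirius from modern apparent magnitudes
    # (Sun about -26.7, Sirius about -1.5); each magnitude is a factor of 10^0.4.
    flux_ratio = 10 ** (0.4 * (26.7 - 1.5))   # roughly 1.2e10

    AU_PER_LIGHT_YEAR = 63241.0

    # Huygens' assumption: Sirius is exactly as luminous as the Sun,
    # so its distance in astronomical units is sqrt(flux_ratio).
    print(math.sqrt(flux_ratio) / AU_PER_LIGHT_YEAR)        # ~1.7 light years - far too small

    # Correcting with the modern luminosity of Sirius (~25 Suns):
    print(math.sqrt(flux_ratio * 25) / AU_PER_LIGHT_YEAR)   # ~8.7 light years, near the accepted 8.6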


1658 Irish Archbishop James Ussher, following a literal interpretation of the Bible, calculated that the Earth was created on the evening of 22 October 4004 B.C.


1660 English mathematician and astronomer, Richard Towneley together with his friend, physician Henry Power investigated the expansion of air at different altitudes by enclosing a fixed mass of air in a Torricelli/Huygens U-tube with its open end immersed in a dish of mercury. They noted the expansion of the enclosed air at different altitudes on a hill near their home and concluded that gas pressure, the external atmospheric pressure of the air on the mercury, was inversely proportional to the volume. They communicated their findings to Robert Boyle a distinguished contemporary chemist who verified the results and published them two years later as Boyle's Law. Boyle referred to Towneley's conclusions as "Towneley's Hypothesis".


See also Towneley's improvements to the pendulum clock timekeeping mechanism. Another of his ideas for which others appear to have got the credit.


1660 The Royal Society founded in London as a "College for the Promoting of Physico-Mathematical Experimental Learning", which met weekly to discuss science and run experiments. Original members included chemist Robert Boyle and architect Christopher Wren.


See also the Scientific Revolution


1661 Huygens invents the U tube manometer, a modification of Torricelli's barometer, for determining gas pressure differences. In a typical "U Tube" manometer the difference in pressure (really a difference in force) between the ends of the tube is balanced against the weight of a column of liquid. The gauges are only suitable for measuring low pressures, most gauges recording the difference between the fluid pressure and the local atmospheric pressure when one end of the tube is open to the atmosphere.
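
The working relationship of the manometer is that the pressure difference across the tube is balanced by the weight of the displaced liquid column, Δp = ρgh. A minimal Python sketch with illustrative figures:

    g = 9.81  # acceleration due to gravity in m/s²

    def pressure_difference(density_kg_m3, column_height_m):
        # The pressure difference supports the weight of the liquid column
        return density_kg_m3 * g * column_height_m

    # A 10 cm difference in mercury levels (density ~13,560 kg/m³)
    print(pressure_difference(13560.0, 0.10))   # ~13,300 Pa

    # The same 10 cm reading with water (1,000 kg/m³) - a far more sensitive gauge
    print(pressure_difference(1000.0, 0.10))    # ~981 Pa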


1661 Irish chemist Robert Boyle published "The Sceptical Chymist" in which he introduced the concept of elements. At the time only 12 elements had been identified. These included nine metals, Gold, Silver, Copper, Tin, Lead, Zinc, Iron, Antimony and Mercury, and two non metals, Carbon and Sulphur, all of which had been known since antiquity, as well as Bismuth which had been discovered in Germany around 1400 A.D. Platinum had been known to South American Indians from ancient times but only came to the attention of Europeans in the eighteenth century. Boyle himself discovered Phosphorus, which he extracted from urine in 1680, taking the total of known elements to fourteen.

Though an alchemist himself, believing in the possibility of transmutation of metals, he was one of the first to break with the alchemist's tradition of secrecy and published the details of his experimental work including failed experiments.


See also the Scientific Revolution


1662 Boyle published Boyle's Law stating that the pressure and volume of a gas are inversely proportional.

PV=K

The first of the Gas Laws.

The relationship was originally discovered in 1660 by English mathematician Richard Towneley but attributed to Boyle. Neither Towneley nor Boyle was aware that the relationship was temperature dependent and it was not until 1676 that the relationship was rediscovered by French physicist and priest, Abbé Edme Mariotte, and shown to apply only when the gas temperature is held constant. The law is known as Mariotte's Law in non-English speaking countries.
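
A minimal numerical illustration of the law in Python (the figures are illustrative): since PV = K at constant temperature, halving the volume of a trapped gas doubles its pressure.

    def boyle_pressure(p1, v1, v2):
        # PV = K at constant temperature, so p2 = p1 * v1 / v2
        return p1 * v1 / v2

    # Compressing 2 litres of air at atmospheric pressure (101.3 kPa) to 1 litre
    print(boyle_pressure(101.3, 2.0, 1.0))   # 202.6 kPa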


See also Boyle on Sound Transmission


1663 Otto von Guericke the Burgomaster of Magdeburg in Germany invented the first electric generator, which produced static electricity by rubbing a pad against a large rotating sulphur ball which was turned by a hand crank. It was essentially a mechanised version of Thales' demonstrations of electrostatics using amber in 600 B.C. and the first machine to produce an electric spark. Von Guericke had no idea what the sparks were and their production by the machine was regarded at the time as magic or a clever trick. The device enabled experiments with electricity to be carried out but since it was not until 1729 that the possibility of electric conduction was discovered by Gray, the charged sulphur ball had to be moved to the place where the electric experiment took place. Von Guericke's generator remained the standard way of producing electricity for over a century.


Von Guericke was famed more for his studies of the properties of a vacuum and for his design of the Magdeburg Hemispheres experiment. Aristotle's theory that a vacuum can not exist had, like many of his theories, been accepted uncritically by philosophers as conventional wisdom for centuries, encapsulated in the saying "Nature abhors a vacuum". In 1650 von Guericke set about disproving this theory by experimental means, designing a piston based air pump with which he could evacuate the air from a chamber. He used it to create a vacuum in experiments which showed that the sound of a bell in a vacuum can not be heard, nor can a vacuum support a candle flame or animal life. To demonstrate the strength of a vacuum, in 1654 he constructed two hollow copper hemispheres which fitted together along a greased flange forming a hollow sphere. When the air was evacuated from the sphere, the external air pressure held the hemispheres together so strongly that two teams of horses could not pull them apart, yet when air was released into the sphere the hemispheres simply fell apart.

(See Magdeburg Hemispheres picture).


See also the Scientific Revolution


1665 Boyle published a description of a hydrometer for measuring the density of liquids which was essentially the same as those still in use today for measuring the specific gravity (S.G.) of the electrolyte in Lead Acid batteries. Hydrometers consist of a glass tube, weighted with a sealed capsule of Lead or Mercury, which is floated in the liquid being measured. The height at which the tube floats indicates the density of the liquid.

The hydrometer is however considered to be the invention of Greek mathematician Hypatia.


1665 The Journal des Sçavans (later renamed Journal des Savants), the earliest academic journal to be published in Europe was established. Its content included obituaries of famous men, church history, and legal reports. It was followed two months later by the first appearance of the Philosophical Transactions of the Royal Society.


1665 English polymath, Robert Hooke published Micrographia in which he illustrated a series of very small insects and plant specimens he had observed through a microscope he had constructed himself for the purpose. It included a description of the eye of a fly and tiny sections of plant materials for which he coined the term "cells" because their distinctive walls reminded him of monks' quarters or prison cells. The publication also included the first description of an optical microscope, and it is claimed, was the inspiration to Antonie van Leeuwenhoek who is often credited himself with the invention of the microscope. Hooke's publication was the first major publication of the recently founded Royal Society and was the first scientific best-seller, inspiring a wide public interest in the new science of microscopy.


See also the Scientific Revolution


1666 The French Académie des Sciences was founded in Paris by King Louis XIV at the instigation of Jean-Baptiste Colbert the French Minister of Finances, as a government organisation with the aim of encouraging and protecting French scientific research. Colbert's dirigiste economic policies were protectionist in nature and involved the government in regulating French trade and industry, echoes of which remain to this day.


1668 Dutch draper, haberdasher and scientist, Antonie Phillips van Leeuwenhoek, possibly inspired by Hooke's Micrographia (see above) made his first microscope. Known as the "Father of Microbiology" he subsequently produced over 450 high quality lenses and 247 microscopes which he used to investigate biological specimens. He was the first to observe and describe single-celled organisms or microbes which he called animalcules and was also the first to observe and record muscle fibres, bacteria, spermatozoa, and blood flow in capillaries. Van Leeuwenhoek kept the British Royal Society informed of the results of his extensive investigations and eventually became a member himself.


1668 Scottish mathematician and astronomer James Gregory published Geometriae Pars Universalis (The Universal Part of Geometry) in which he proved the fundamental theorem of calculus, that the two operations of differentiation and integration are the inverses of each other. A system of infinitesimals, which we would now call integration, had been used by Archimedes circa 260 B.C. to calculate areas. Later, the concepts of rate and continuity had been studied by Oxford and other scholars since the fourteenth century. But before Gregory, nobody had connected geometry, and the calculation of areas, to motion, and the calculation of velocity.

A more general proof of the relationship between integrals and differentials was developed by English mathematician and theologian Isaac Barrow. It was published posthumously in 1683, by fellow mathematician John Collins, in the Lectiones Mathematicae which summarised Barrow's work, carried out between 1664 and 1677, on the relationships between the estimation of tangents and areas (called quadratures at the time) which mirrored the procedures used in differential and integral calculus.

In 1663 at the age of 33 Barrow was selected as the first Lucasian professor at Cambridge. In 1669 he resigned his position to study divinity for the rest of his life. The Lucasian Chair and the baton for developing the calculus were passed to his student Isaac Newton who was already developing his own ideas on its practical applications around the same time, twenty years before the publication of his Principia Mathematica.


Meanwhile Gregory was one of the first to investigate the properties of transcendental functions and their application to trigonometry and logarithms. A transcendental function "transcends" algebra in that it cannot be expressed in terms of a finite sequence of the algebraic operations of addition, multiplication, and root extraction. Similarly, transcendental numbers are not algebraic: they are not roots of polynomial equations with integer coefficients and so cannot be expressed as integers or ratios of integers, but they can be represented as the sum of an infinite series. Examples of transcendental functions include the exponential function, the logarithm, and the trigonometric functions. Transcendental numbers include π and the exponential e (Euler's number).

Gregory developed a method of calculating transcendental numbers by a process of successive differentiation to produce an infinite power series which converges towards the result but he was unable to prove conclusively that π and e were transcendental. The proof was confirmed many years later after his untimely death at the age of only 36.

English mathematician Brook Taylor applied Gregory's theory to various trigonometric and logarithmic functions to produce corresponding series which he published in his book "Methodus incrementorum directa et inversa" in 1715. These series became known as Taylor expansions. Scottish mathematician Colin Maclaurin subsequently developed a modified version or special case of the Taylor expansion, simplifying it by centring it on zero which became known as the Maclaurin expansion.


Taylor and Maclaurin expansions are used extensively today in modern computer systems to provide mathematical approximations for trigonometric, logarithmic and other transcendental functions. See examples.
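
As an example, the Maclaurin expansion of the sine function is sin(x) = x - x³/3! + x⁵/5! - ... The Python sketch below sums the first few terms and compares the result with the library value; this is a minimal illustration of the principle, though practical maths libraries use more refined variants of such series.

    import math

    def maclaurin_sin(x, terms=8):
        # sin(x) = x - x^3/3! + x^5/5! - ... (Maclaurin expansion)
        total = 0.0
        for n in range(terms):
            total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
        return total

    x = math.pi / 4
    print(maclaurin_sin(x), math.sin(x))   # both ~0.70710678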


1675 Boyle discovered that electric force could be transmitted through a vacuum and observed attraction and repulsion.


1676 Prolific English engineer, surveyor, architect, physicist, inventor, socialite and self publicist, Robert Hooke, considered by some to be England's Leonardo (there were others - see Cayley), is now mostly remembered for Hooke's Law for springs which states that the extension of a spring is proportional to the force applied, or as he wrote it in Latin "Ut tensio, sic vis" ("as is the extension, so is the force"). From this the energy stored in the spring can be calculated by integrating the force times the displacement over the extension of the spring. The force per unit extension is known as the spring constant. Hooke actually discovered his law in 1660, but afraid that he would be scooped by his rival Newton, he published his preliminary ideas as an anagram "ceiiinosssttuv" in order to register his claim for priority. It was not until 1676 that he revealed the law itself. The forerunner of digital time stamping?
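
In modern notation Hooke's Law is written F = kx, where k is the spring constant, and integrating the force over the extension gives the stored energy E = ½kx². A minimal Python sketch with an illustrative spring constant:

    def spring_force(k, extension_m):
        # Hooke's Law: "as the extension, so the force", F = k * x
        return k * extension_m

    def stored_energy(k, extension_m):
        # Integrating F dx from 0 to x gives E = 0.5 * k * x^2
        return 0.5 * k * extension_m ** 2

    k = 200.0                       # spring constant in N/m (illustrative)
    print(spring_force(k, 0.05))    # 10 N at 5 cm extension
    print(stored_energy(k, 0.05))   # 0.25 J stored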


In 1657 Hooke was the first to propose using a spring rather than gravity to stimulate the oscillator in clock timekeeping regulators, eliminating the pendulum and enabling much smaller, portable clocks and watches. He envisaged the back and forth bending of a straight flat spring to provide the necessary force, but it was Huygens who later made the first practical clocks based on this method.

The following year, Hooke invented the Anchor Escapement, the essential timekeeping mechanism used in long case (grandfather) pendulum clocks for over 200 years until it was gradually replaced by the more accurate deadbeat escapement.

See more about Hooke's clock mechanisms.


Hooke was surveyor of the City of London and assistant to Christopher Wren in rebuilding the city after the great fire of London in 1666. He made valuable contributions to optics, microscopy, astronomy, the design of clocks, the theories of springs and gases, the classification of fossils, meteorology, navigation, music, mechanical theory and inventions, but despite his many achievements he was overshadowed by his contemporary Newton with whom he was, unfortunately, constantly in dispute. Hooke claimed a role in some of Newton's discoveries but he was never able to back up his theories with mathematical proofs. Apparently there was at least one subject which he had not mastered.


1673 Between the years 1673 and 1686, German mathematician, diplomat and philosopher, Gottfried Wilhelm Leibniz, developed his theories of mathematical calculus publishing the first account of differential calculus in 1684 followed by the explanation of integral calculus in 1686. Unknown to him these techniques were also being developed independently by Newton. Newton got there first but Leibniz published first and arguments about priority raged for many years afterwards. Leibniz's notation has been adopted in preference to Newton's but the concepts are the same.

He also introduced the words function, variable, constant, parameter and coordinates to explain his techniques.


Leibniz was a polymath and another candidate for the title "The last man to know everything". As a child he learned Latin at the age of 8, Greek at 14 and in the same year he entered the University of Leipzig where he earned a Bachelors degree in philosophy at the age of 16, a Bachelors degree in law at 17 and Masters degrees in both philosophy and law at the age of 20. At 21 he obtained a Doctorate in law at Altdorf. In 1672 when he was 26, his diplomatic travels took him to Paris where he met Christiaan Huygens who introduced him to the mathematics of the pendulum and inspired him to study mathematics more seriously.


In 1679 Leibniz proposed the concept of binary arithmetic in a letter written to French mathematician and Jesuit missionary to China, Joachim Bouvet, showing that any number may be expressed by 0's and 1's only. Now the basis of digital logic and signal processing used in computers and communications.
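
Leibniz's claim that any number may be expressed with 0s and 1s alone is easily demonstrated by repeated division by two, as in this minimal Python sketch (the function name is an invention for illustration):

    def to_binary(n):
        # Repeatedly divide by 2; the remainders, read in reverse, are the binary digits
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 2))
            n //= 2
        return "".join(reversed(digits))

    print(to_binary(1679))        # '11010001111'
    print(int("11010001111", 2))  # back to 1679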

Surprisingly Leibniz also suggested that God may be represented by unity, and "nothing" by zero, and that God created everything from nothing. He was convinced that the logic of Christianity would help to convert the Chinese to the Christian faith. He believed that he had found an historical precedent for this view in the 64 hexagrams of the Chinese I Ching or the Book of Changes attributed to China's first shaman-king Fuxi (Fu Hsi) dating from around 2800 B.C. and first written down as the now lost manual Zhou Yi in 900 B.C.. A hexagram consists of blocks of six solid or broken lines (or stalks of the Yarrow plant) forming a total of 64 possibilities. The solid lines represent the bright, positive, strong, masculine Yang with active power while the broken or divided lines represent the dark, negative, weak, feminine Yin with passive power. According to the I Ching, the two energies or polarities of the Yin and Yang are both opposing and complementary to each other and represent all things in the universe which is a progression of contradicting dualities.

Although the I Ching had more to do with fortune telling than with mathematics, there were other precedents to Leibniz's work. The first known description of a binary numeral system was made by Indian mathematician Pingala, variously dated between the 5th and the 2nd centuries B.C.


In 1671 Leibniz invented a 4 function mechanical calculator which could perform addition, subtraction, multiplication and division on decimal numbers which he demonstrated to the Royal Society in London in 1673 but they were not impressed by his crude prototype machine. (Pascal's 1642 calculator could only perform addition and subtraction.) It was not until 1676 that Leibniz eventually perfected it. His machine used a stepped cylinder to bring into mesh different gear wheels corresponding to the position of units, tens, hundreds etc. to operate on the particular digit as required. Strangely, as the inventor of binary arithmetic, he did not use it in his calculator.


His most famous philosophical proposition was that God created "the best of all possible worlds".


1681 French physicist and inventor Denis Papin invented the pressure release valve or safety valve to prevent explosions in pressure vessels. Although Papin is credited with the invention, safety valves had in fact been described by Glauber thirty years earlier; however, Papin's valve was adjustable for different pressures by means of moving the lead weight along a lever which kept the valve shut. Papin's safety valve became a standard feature on steam engines, saving many lives from explosions.

The invention of the safety valve came as a result of his work with pressurised steam. In 1679 he had invented the pressure cooker which he called the steam digester.


Observing that the steam tended to lift the lid of his cooker, in 1690 Papin also conceived the idea of using the pressure of steam to do useful work. He introduced a small amount of water into a cylinder closed by a piston. On heating the water to produce steam, the pressure of the steam would force the piston up. Cooling the cylinder again caused the steam to condense creating a vacuum under the piston which would pull it down (in fact the atmospheric pressure would push the piston down). This pumping action by a piston in a cylinder was the genesis of the reciprocating steam engine. Papin envisaged two applications for his piston engine. One was a toothed rack attached to the piston whose movement turned a gear wheel to produce rotary motion. The other was to use the reciprocating movements of the piston to move oars or paddles in a steam powered boat. Unfortunately he was unable to attract sponsors to enable him to develop these ideas. Papin was not the first to use a piston, von Guericke came before him, but he was the first to use it to capture the power of steam to do work.


In 1707, with the collaboration of Gottfried Leibniz (still smarting over his dispute with Isaac Newton), Papin published "The New Art of Pumping Water by Using Steam". The Papin / Leibniz pump had many similarities to Savery's 1698 water pump and their claims resulted in a protracted dispute involving the British Royal Society as to the true inventor of the steam driven water pump. Savery's pump did not use a piston but used a vacuum to draw water from below the pump and steam pressure to discharge it at a higher level. Papin's pump on the other hand used only steam pressure and could not draw water from a lower level. (See diagram of Papin's Steam Engine)

Unlike Savery's pump, Papin's pump used a closed cylinder, adjacent to (or even partially immersed in) the lower pool, fed with water from the pool through a non-return valve at the bottom of the cylinder. In the cylinder a free piston rested on the surface of the water which, at its highest point, was level with the water in the pool. Steam from a separate boiler introduced above the piston forced it downwards displacing the water in the cylinder through another non-return valve at the bottom of the cylinder and upwards to the discharge level. Simply by exhausting the steam from the cylinder through a tap, the external water pressure would cause the cylinder to refill with water through the non-return valve at the base of the cylinder elevating the piston once more to the level of the surrounding water pool. Cooling was unnecessary since the design did not depend on creating a vacuum in the cylinder.

Papin also suggested a way of using his pump to create rotary motion. He proposed to feed the water raised by the pump over a waterwheel returning it to a lower reservoir in a closed loop system.


Like many gifted inventors Papin died destitute.


See more about Steam Engines.


1687 "Philosophiae Naturalis Principia Mathematica" - Mathematical Principles of Natural Philosophy published by English physicist and mathematician Isaac Newton. One of the most important and influential books ever published, it was written in Latin and not translated into English until 1729.


By coincidence Newton was born in 1642, the year that Galileo died.

He made significant advances in the study of Optics demonstrating in 1672 that white light is made up from the spectrum of colours observed in the rainbow. He used a prism to separate white light into its constituent colour spectrum and by means of a second prism he showed that the colours could be recombined into white light.

In 1668 he designed and made the first known reflecting telescope, based on a concave primary mirror and a flat secondary mirror.


He is perhaps best remembered however for his Mechanics, the Laws of Motion and Gravitation which his "Principia" contains.

Newton's Laws of Motion can be summarised as follows:

  • First Law: - Any object will remain at rest or in uniform motion in a straight line unless compelled to change by some external force.
  • Second Law: - The acceleration a of a body is directly proportional to, and in the same direction as, the net force F acting on it, and inversely proportional to its mass m. Thus, F = ma.
  • Third Law: - To every action there is an equal and opposite reaction.

70 years earlier, Galileo came very close to developing these relationships but he had neither the mathematical tools nor the instruments to make precise measurements to prove his theories. Newton's first law is a restatement of Galileo's concept of inertia or resistance to change which he measured by its mass. See a Comparison of Galileo's and Newton's "Laws of Motion"


Newton also developed the Law of Universal Gravitation which states that any two bodies in the universe attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. Thus:

F = G m1 m2 / r²

Where:

F is force between the bodies

G is the Universal Gravitational Constant

m1 and m2 are the masses of the two bodies

r is the distance between the centres of the bodies
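
With modern values for the gravitational constant G and the masses involved (figures which were not available to Newton), the law can be evaluated directly. A minimal Python sketch computing the attraction between the Earth and the Moon:

    G = 6.674e-11   # universal gravitational constant in N·m²/kg² (modern value)

    def gravitational_force(m1, m2, r):
        # Newton's Law of Universal Gravitation: F = G * m1 * m2 / r^2
        return G * m1 * m2 / r ** 2

    earth_mass = 5.972e24   # kg
    moon_mass = 7.35e22     # kg
    distance = 3.844e8      # mean Earth-Moon centre-to-centre distance in metres

    print(gravitational_force(earth_mass, moon_mass, distance))   # ~2.0e20 N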


Newton was thus able to calculate or predict gravitational forces using the concept of action at a distance. He was also able to explain the motion of the tides as being due to the varying gravitational pull of the Moon on the oceans, as the Earth's daily rotation carried them nearer to, and further from, the Moon.

He did not discover gravity however, nor could he explain it. Galileo was well aware of the effects of gravity, and so was Huygens, a contemporary of Newton, who believed Descartes' earlier theory that gravity could be explained in mechanical terms as a high speed vortex in the aether which caused tiny particles to be thrown outwards by the centrifugal force of the vortex while heavier particles fell inwards due to balancing centripetal forces. Huygens never accepted Newton's inverse square law of gravity.

Newton's concept that planetary motion was due to gravity was completely new. Before that, the motion of heavenly bodies had been explained by Gilbert as well as his contemporary the German astronomer Kepler (1571-1630), and others as being due to magnetic forces.

Even now in the twenty first century, we still do not have a satisfactory explanation of the nature of gravitational forces.


Newton was the giant of the Scientific Revolution. He assimilated the advances made before him in mathematics, astronomy, and physics to derive a comprehensive understanding of the physical world. The impact of the publication of Newton's laws of dynamics on the scientific community was both profound and wide ranging. The laws and Newton's methods provided the basis on which other theories, such as acoustics, fluid dynamics, kinetic energy and work done were built as well as down to earth technical knowledge which enabled the building of the machines to power the Industrial Revolution and, at the other end of the spectrum, they explained the workings of the Universe.


However, of equal or even greater importance was the fact that Newton showed for the first time the general principle that natural phenomena, events and time varying processes, not just mechanical motions, obey laws that can be represented by mathematical equations enabling analysis and predictions to be made. The laws of nature, represented by the laws of mathematics, became the foundation of modern science. The 3 volume publication was thus a major turning point in the development of scientific thought, sweeping away superstition and so called "rational deduction" as ways of explaining the wonders of nature.

Newton's reasoning was supported by his invention of the mathematical techniques of Differential and Integral Calculus and Differential Equations, actually developed in 1665 and 1666, twenty years before he wrote the "Principia" but not used in the proofs it contains. These were major advances in scientific knowledge and capability which extended the range of existing mathematical tools available for characterising nature and for carrying out scientific analysis.

See also Gregory's earlier contribution to calculus theory.


Newton engaged in a prolonged feud with Robert Hooke who claimed priority on some of Newton's ideas. Newton's oft repeated quotation "If I have seen further, it is by standing on the shoulders of giants." was actually written in a sarcastic letter to Hooke, who was almost short enough to be classified as a dwarf, with the implication that Hooke didn't qualify as one of the giants.


Leibniz working contemporaneously with Newton also developed techniques of differential and integral calculus and a dispute developed with Newton as to who was the true originator. Newton's discovery was made first, but Leibniz published his work before Newton. However there is no doubt that both men came to the ideas independently. Newton developed his concept through a study of tangents to a curve and also considered variables changing with time, while Leibniz arrived at his conclusions from calculations of the areas under curves and thought of variables x and y as ranging over sequences of infinitely close values.


Newton is revered as the founder of modern physical science, but despite the great fame he achieved in his lifetime, he remained a modest, diffident, private and religious man of simple tastes. He never married, devoting his life to science.


Newton didn't always have his head in the clouds. In his spare time, when he wasn't dodging apples, he invented the cat-flap.


1698 Searching for a method of replacing the manual or animal labour for pumping out the seeping water which gathered at the bottom of coal mines, English army officer Thomas Savery designed a mechanical, or more correctly, a hydraulic water pump powered by steam. He called the process "Raising Water by Fire". Savery was impressed by the great power of atmospheric pressure working against a vacuum as demonstrated by von Guericke's Magdeburg Hemispheres experiment. He realised that a vacuum could be produced by condensing steam in a sealed chamber and he used this principle as the basis for the first practical steam driven water pump which became known as "The Miner's Friend". Savery's pump did not produce any mechanical motion but used atmospheric pressure to force the water up a vertical pipe from a well or pond below, to fill the vacuum in the steam chamber above, and steam pressure to drive the water in the steam chamber up a vertical discharge pipe to a level above the steam chamber.


(See diagram of Savery's Steam Engine)


The essential components of the pump were a boiler producing steam, a steam chamber at the heart of the system and suction and discharge water pipes each containing a non-return flap valve he called a clack.


Starting with some water in the steam chamber, the steam valve from the boiler is opened introducing steam into the steam chamber where the pressure of the steam forces the water out through a non-return flap valve into the discharge pipe. The head of water in the discharge pipe keeps the flap valve closed so the water can not return into the steam chamber. The steam supply to the chamber is then turned off and the chamber is cooled from the outside with cold water which causes the steam in the chamber to condense creating a vacuum in the chamber. The vacuum in turn causes water to be sucked up from the well or lower pond through another flap valve in the induction pipe into the steam chamber. The head of water in the steam chamber keeps the flap valve closed so that the water can not flow back to the well. Once the chamber is full, steam is fed once more into the chamber and the cycle starts again.


Efficiency was improved by using two parallel steam chambers alternately such that one of the chambers was charged with steam while the other chamber was cooled. The theoretical maximum depth from which Savery's engine can draw water is limited by the atmospheric pressure which can support a head of 32 feet (10 m), but because of leaks the practical limit is about 25 feet. In a mine this would require the engine to be below ground close to the water level, but as we know, fire and coal mines don't mix. On the discharge side the maximum height to which the water can be raised is limited by the available steam pressure and also by the safety of the pressure vessels whose solder joints are particularly vulnerable, a serious drawback with the available 17th century technology.
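
The suction limit follows directly from balancing atmospheric pressure against the weight of the water column, h = P/(ρg). A minimal Python check using modern round figures:

    P_ATM = 101325.0     # atmospheric pressure in Pa
    RHO_WATER = 1000.0   # density of water in kg/m³
    g = 9.81             # acceleration due to gravity in m/s²

    max_head_m = P_ATM / (RHO_WATER * g)
    print(max_head_m)           # ~10.3 m
    print(max_head_m * 3.281)   # ~33.9 ft, close to the theoretical figure quoted above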


See more about Steam Engines.


1700 At the instigation of Leibniz, King Frederick I of Prussia founded the German Academy of Sciences in Berlin to rival Britain's Royal Society and the French Académie des Sciences. Leibniz was appointed as its first president.


1701 English gentleman farmer Jethro Tull, developed the seed drill, a horse-drawn sowing device which mechanised the planting of seeds, precisely positioning them in the soil and then covering them over. It thus enabled better control of the distribution and positioning of the seeds leading to improvements of up to nine times in crop yields per acre (or hectare). For the farm hand, the seed drill cut out some of the back-breaking work previously employed in the task but the downside was that it also reduced the number of farm workers needed to plant the crop. The seed drill was a relatively simple device which could be made by local carpenters and blacksmiths. Its combined benefits of higher crop yields and productivity improvements were the first steps in mechanised farming which revolutionised British agriculture.

The design concept was not new since similar devices had been used in Europe in the middle ages. Single tube seed drills were also known to have been used in Sumeria in Mesopotamia (modern day Iraq) during the Late Bronze Age (1500 B.C.) and multi-tube drills were used in China during the Qin Dynasty.


The introduction of Tull's improved seed drill was an early example of the mechanisation of manual labour tasks which ushered in the Industrial Revolution in Britain.


1705 Head of demonstrations at the Royal Society in London, English physicist and instrument maker appointed by Isaac Newton, Francis Hauksbee the Elder demonstrated an electroluminescent glow discharge lamp which gave off enough light to read by. It was based on von Guericke's electric generator with an evacuated glass globe, containing mercury, replacing the sulphur ball. It produced a glow when he rubbed the spinning globe with his bare hands. The blue light it produced seemed to be alive and was considered at the time to be the work of God. Like von Guericke, Hauksbee never realised the potential of electricity. Instead, electric phenomena were for many years the tool of conjurors and magicians who entertained people at parties with mild electric shocks, producing sparks or miraculously picking up feathers.


1709 Abraham Darby, from a Quaker family in Bristol established an iron making business at Coalbrookdale in Shropshire introducing new production methods which revolutionised Iron making. He already had a successful brass ware business in Bristol employing casting and metal forming technologies he had learned in the Netherlands and in 1708 he had patented the use of sand casting which he realised was suitable for the mass production of cheaper Iron pots for which there was a ready market. The purpose of his move to Coalbrookdale which already had a long established Iron making industry was to apply these technologies and his metallurgical knowledge to the Iron making business to produce cast Iron kettles, cooking pots, cauldrons, fire grates and other domestic ironware with intricate shapes and designs.

Early blast furnaces used charcoal as the source of the Carbon reducing agent in the Iron smelting process, but Darby investigated the use of different fuels to reduce costs. This was partially out of necessity since the surrounding countryside had been denuded of trees to produce charcoal to fuel the local Iron making blast furnaces, but there was still a plentiful local supply of coal as well as Iron ore and limestone. He experimented with using coal instead of charcoal but the high Sulphur content of coal made the Iron too brittle. His greatest breakthrough was the use of coke, instead of charcoal, which produced higher quality Iron at lower cost. It could also be made in bigger blast furnaces, permitting economies of scale.

See the following Footnote about Iron and Steel Making.


Abraham Darby founded a dynasty of Iron makers. His son, Abraham Darby II, expanded the output of the Coalbrookdale Ironworks to include Iron wheels and rails for horse drawn wagon ways and cylinders for the steam engines recently invented by Newcomen some of which he used himself to pump water supplying his water wheels. His grandson, Abraham Darby III, continued in the business and was the promoter responsible for building the world's first Iron bridge at Coalbrookdale.


The mass production of low cost Ironware made possible by Abraham Darby's Iron making process was a major foundation stone on which the subsequent industrialisation of Britain and the Industrial Revolution were based.


  • Footnote
  • Some Key Iron and Steel Making Processes

    • Smelting is the high temperature process of extracting Iron or other metals such as Gold, Silver and Copper from their ores. The principle behind the Iron making or smelting process is the chemical reduction of the Iron ores, which are composed of Iron oxides, mainly FeO, Fe2O3, and Fe3O4, by heating them in a furnace together with Carbon. The Carbon burns to form Carbon monoxide (CO), which then acts as the reducing agent in the following typical reactions. The process itself is exothermic which helps to maintain the reaction once it is started.
    • 2C + O2 → 2CO

    • Fe2O3 + 3CO → 2Fe + 3CO2

      In early times the Carbon was supplied in the form of charcoal. Nowadays coke is used instead. Iron ore however contains a variety of unwanted impurities which affect the properties of the finished iron in different ways and so must be removed from the ore or at least controlled to an acceptable level. A flux such as limestone is often used for this "cleaning" purpose. By combining with the impurities it forms a slag which floats to the top and can be removed from the melt.

    • Casting is the process of pouring molten Iron or steel into a mould and allowing it to solidify. It is an inexpensive method of producing metal components in intricate shapes or simple ingots. Moulds must be able to withstand high temperatures and are usually made from sand with a clay bonding agent to hold it together. The cavity in the mould is formed around a wooden pattern which is removed before pouring in the hot metal.
    • Forging is the process of shaping malleable metals into a desired form by means of compressive forces. It was a skill used for many centuries by blacksmiths who heated the metal in a forge to soften it, then beat it into shape using a hammer. Modern day forging uses machines such as large drop-forging hammers, rolling mills, presses and dies to provide the necessary compression of the work piece. Because these machines can exert very high forces on the work piece, it is also possible to work with cold, unheated metals in some applications. The forging process is not suitable for shaping cast Iron because it is brittle and likely to shatter.
    • Swaging is a special case of forging, often cold forging, to form metal, usually into long shapes such as tubes, channels or wires by forcing or pulling the workpiece through a die or between rolls. It is also the method used to form a lip on the edge of sheet steel to provide stability or safety from injury from sharp metal edges.
    • See how gun barrels were manufactured by swaging.

    • Heat Treatment
    • Heat treatment is the black art, practised by blacksmiths for hundreds of years, of manipulating the properties of steel to suit different applications. These are the tools they have used.

      In its simplest form, steel is an alloy of Iron and Carbon and these two elements can exist in several phases which can change with temperature. The mechanical properties of the steel depend on the Carbon content and on the structure of the alloy phases present. Heat treatment is concerned with controlling the phases of the alloy to achieve the desired mechanical properties. There are two critical temperatures between which phase changes occur, namely 700°C and 900°C.

      The basic phases and phase changes in normal cast steel are as follows:

      • Steel at normal working temperature (below 700°C) is made up from pearlite which is a mixture of cementite and ferrite (Iron). Iron on its own is very soft.
      • Cementite is a name given to the very hard and brittle iron carbide Fe3C which is iron chemically combined with carbon.
      • Above the critical temperature of 700°C a structural change takes place in the alloy and the Carbon in the pearlite dissolves into the iron to form austenite which is a hard and non-magnetic, solid solution of Carbon in Iron.
      • If the temperature of the steel cools normally below the 700°C critical temperature, the transformation is reversed and the slow cooling austenite is transformed back into pearlite.
      • If however the austenite is cooled very quickly by suddenly quenching it in cold water or other cold fluid, the transformation does not have time to take place before the temperature of the alloy falls below the critical temperature. The rapid drop below the transformation temperature thus prevents the transformation to pearlite and instead tends to freeze the composition of the austenite at a temperature below the critical temperature. This transforms the ferrite solution into very hard martensite in which the ferrite is supersaturated with Carbon. Martensite is too hard and brittle for most applications.
      • Quenching at intermediate temperatures results in a mix of martensite and pearlite leaving the steel with an intermediate hardness level.

      These transformations are exploited in the following processes:

    • Hardening - Steel can be hardened by heating it to above the critical temperature and suddenly quenching it in a cold liquid to produce martensite.
    • Annealing - Steel can be softened to make it more workable by heating it to above the critical temperature to form austenite, then letting it cool down slowly to form pearlite. This process is also used to relieve work hardening stresses and crystal dislocations caused during machining or forming processes on the steel.
    • Tempering - The level of hardness or malleability of the steel can be set at any intermediate level between the extremes of the hard martensite and the soft pearlite to produce steel with properties tailored for different applications, from cutting tools to springs, by quenching the steel at the appropriate temperature. Starting with hard martensite, the temperature is gradually increased so that it is partially changed back to pearlite reducing its hardness and increasing its toughness. The workpiece is quenched or allowed to cool naturally when the desired temperature has been reached.

    The traditional method used for centuries for judging the temperature at which quenching should occur was by means of colour changes on the polished surface of the steel as it is heated. As the steel is heated an oxide layer forms on its surface causing thin-film interference which shows up as a specific colour depending on the thickness of the layer. As the temperature increases the thickness of the oxide layer increases and the colour changes correspondingly so that for very hard tool steel the workpiece is quenched when the colour is in the light to dark straw range (corresponding to 230°C to 240°C), whereas for spring steel the steel may be quenched when the colour is blue (300°C). Nowadays, for major tempering processes the temperature is measured by infrared thermometers or other instruments however the traditional method is still widely used for small jobs.

    • Case Hardening
    • It is difficult to achieve both extreme hardness and extreme toughness in homogeneous alloys. Case hardening is a method of obtaining a thin layer of hard (high Carbon) steel on the surface of a tough (low Carbon) steel object while retaining the toughness of its body. Essentially a development of the ancient cementation process for carbonising Iron, it involves the diffusing of Carbon into the outer layer of the steel at high temperature in a Carbon rich environment for a pre-determined period and then quenching it so that the Carbon structure is locked in.


    Summary of Iron and Steel Making Processes and What They Do

      • Bloomery - Low temperature furnace. Converts Iron ore into wrought Iron.
      • Cementation Process - Low temperature furnace. Converts wrought iron into steel by diffusion of Carbon.
      • Blast Furnace - High temperature furnace. Converts Iron ore into Pig iron.
      • Puddling - High temperature furnace. Converts pig Iron into wrought Iron.
      • Casting - High temperature furnace. Moulds molten Iron and steel output into useful shapes.
      • Forging - Mechanical process. Forms steel ingots into useful shapes.
      • Heat Treatment - Low temperature process. Changes the mechanical properties of the steel.
      • Crucible Process - High temperature, low volume process. Purifies and strengthens low quality steel. Also used to create special steels and alloys.
      • Bessemer Converter - High temperature furnace. Converts pig Iron into steel
      • Open Hearth (Siemens) Furnace - High temperature furnace. Converts pig Iron and scrap Iron into steel
      • Electric Arc Furnace - Converts scrap Iron and steel into steel.

    Iron and Steel Properties

    • Wrought Iron
    • Wrought Iron was initially developed by the Hittites around 2000 B.C. In early times in Europe the smelting process was carried out by the village blacksmith in a simple chimney shaped furnace, constructed from clay or stone with a clay lining, called a bloomery. Gaps around the base allowed air to be supplied by means of a bellows blowing the air through a tuyère into the furnace. Charcoal was both the initial heat source and the Carbon reducing agent for extracting the Iron from the ore. Once the furnace was started the Iron ore and more charcoal were loaded from the top to start and maintain the chemical reaction. It was not usually possible with this method to achieve temperatures as high as 1538°C, the melting point of pure Iron, but it was sufficient to heat up the Iron ore to a spongy mass called a bloom, separating the Iron from the majority of impurities in the Iron ore but leaving some glassy silicates included in the Iron. If the furnace temperature was allowed to get too high the bloom could melt and Carbon could dissolve into the Iron giving it the unwanted properties of cast Iron.

      Once the reduction process was complete the bloom was removed from the furnace and by heating and hammering it, the impurities were forced out but some of the silicates remained as slag, which was mainly Calcium silicate, CaSiO3, in fibrous inclusions in the Iron creating wrought Iron (from "wrought" meaning "worked"). Wrought Iron has a very low Carbon content of around 0.05% by weight with good tensile strength and shock resistance but is poor in compression and the slag inclusions give the Iron a typical grained appearance. Being relatively soft, it is ductile, malleable and easy to work and can be heated and forged into shape by hammering and rolling. It is also easy to weld.

      Because of the manual processes involved, wrought Iron could only be made in batches and manufacturing was very costly and difficult to mechanise.


    • Cast Iron
    • Cast Iron was first produced by the Chinese in the fifth century B.C. The process of smelting Iron ore to produce cast Iron needs to operate at temperatures of 1600°C or more, sufficient to melt the Iron. To produce the higher temperatures the bloomery furnace technique was upgraded to a blast furnace by increasing the rate of Oxygen supply to the melt by means of a blowing engine or air pump which blasted the air into the bottom of a cone shaped furnace. Early blowing engines were powered by waterwheels but these were superseded by steam engines once they became available. To remove or reduce the impurities present in the ore, limestone (CaCO3), known as the flux, was added to the charge which was continuously fed into the furnace from above. At the high temperatures in the furnace the limestone reacts with silicate impurities to form a molten slag which floats on top of the denser Iron which sinks to the narrow bottom part of the cone where it can be run off through a channel into moulded depressions in a bed of sand. The slag is similarly run off separately from the top of the melt. Because metal ingots created in the moulds which receive molten Iron from the runner resembled the shape of suckling pigs, the Iron produced this way is known as pig Iron. An important feature of the blast furnace is that it enables cast Iron to be made in a continuous process, greatly reducing the labour costs. Stopping, cooling and restarting a blast furnace however involves a major refurbishment of the furnace to get it back into operation again and great efforts are usually made to avoid such a disruption.


      Iron produced in this way has a crystalline structure and contains 4% to 5% Carbon. The presence of the Carbon atoms impedes the ability of the dislocations in the crystal lattice of the Iron atoms to slide past one another, thus increasing its hardness. Pig Iron is so hard, so brittle and so difficult to work that it is almost useless. It is however reprocessed and used as an intermediate material in the production of commercial Iron and steel by reheating to reduce the Carbon content further or combining the ingots with other materials or even scrap Iron to change its properties. Iron with Carbon content reduced to 2% to 4% is called cast Iron. It can be used to create intricate shapes by pouring the molten metal into moulds and it is easier to work than pig Iron but still relatively hard and brittle. While strong in compression cast Iron has poor tensile strength and is prone to cracking which makes it unable to tolerate bending loads.


    • Steel
    • Steel is Iron after the removal of most of the impurities such as silica, Phosphorus, Sulphur and excess Carbon which severely reduce its strength. It may however have other elements, which were not present in the original ore, added to form alloys which enhance specific properties of the steel. Steel normally has a Carbon content of 0.25% to 1.5%, slightly higher than wrought Iron, but it does not have the silicate inclusions which are characteristic of wrought Iron. Removing the impurities retains the malleability of wrought Iron while giving the steel much greater load-bearing strength, but is an expensive and difficult task.

      Cast steel can be made by a variety of processes including crucible steel, the Bessemer converter and the open hearth method and thus may have a range of properties. See steelmaking summary above.

      Other alloying elements such as Manganese, Chromium, Vanadium and Tungsten may be added to the mix to create steels with particular properties for different applications. By controlling the Carbon content of the steel as well as the percentage of different alloying materials, steel can be made with a range of properties. Examples are:

      • Blister Steel was a crude form of steel made by the cementation process, an early method of hardening wrought Iron. It is now obsolete.
      • Mild steel, the most common form of steel, which contains about 0.25% Carbon making it ductile and malleable so that it can be rolled or pressed into complex forms suitable for automotive panels, containers and metalwork used in a wide variety of consumer products
      • High Carbon steel or tool steel with about 1.5% Carbon which makes it relatively hard with the ability to hold an edge. The higher the Carbon content, the greater the hardness
      • Stainless steel which contains Chromium and Nickel which make it resistant to corrosion
      • Titanium steel which keeps its strength at high temperatures
      • Manganese steel which is very hard and used for rock breaking and military armour
      • Spring steel with various amounts of Nickel and other elements to give it very high yield strength
      • As well as other specialist steels such as those optimised for weldability

      Mild steel has largely replaced wrought Iron which is no longer made in commercial quantities, though the term is often applied incorrectly to craft made products such as railings and garden furniture which are actually made from mild steel.


    Iron and Steelmaking Development Timeline

    Steel making has gone through a series of developments to achieve ever more precise control of the process as well as better efficiency.


1712 English blacksmith Thomas Newcomen built the world's first practical steam engine capable of doing useful mechanical work, rather than simply pumping water directly as Savery's earlier engine had done. It was an atmospheric engine using a piston to produce reciprocating motion. (See diagram of Newcomen's Steam Engine)

In its simplest form, a piston with a fixed connecting rod protruding from the top was mounted in a vertical cylinder above a water boiler. Steam from the boiler introduced at the bottom of the cylinder through a valve pushed the piston up to the top of its stroke. At the top of the stroke, the steam was shut off and the valve was closed trapping the steam inside. As in Savery's engine the cylinder was then cooled, in this case by spraying cold water into the cylinder under the piston to condense the steam. This is the power stroke of the piston in which condensing the steam creates a vacuum under the piston which pulls it back down to its bottom position, or in other words, the atmospheric pressure on the top of the piston pushes it down against the vacuum. This is what gives the engine the name of atmospheric engine.


The fixed piston connecting rod executed a reciprocating linear movement which could be harnessed to perform work.


In practical engines the piston rod was connected to one end of a heavy beam balanced on a pivot above the engine. The power stroke of the piston produced a rocking motion of the beam, pulling the piston end of the beam down while at the same time raising the other end. A second rod, connected to the opposite end of the beam from the piston, could be used to lift weights or water from great depths, however the actual lifting distance was limited by the stroke of the piston. The piston did not need high steam pressure to raise it to the top of its stroke because the unbalanced heavy weight of the lifting gear on the other end of the beam would tend to pull the piston upwards.


Before Newcomen, water pumps were horse drawn and were effective to a maximum depth of 90 feet (27 m). Newcomen's engine could draw water from several hundred feet enabling the operation of much deeper mines.

Because of the low operating steam pressures the engine was relatively safe. Efficiency however was very low because of the energy needed to reheat the steam chamber with every stroke and the time needed for heating and cooling it. Newcomen's first engine made twelve strokes per minute and raised ten gallons (45 Litres) of water per stroke. It was another 57 years before the next innovation in steam power, James Watt's separate steam condenser.
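
As a rough cross-check on these figures, the useful output power of such an engine can be estimated from the weight of water raised per minute. A minimal sketch in Python; the strokes per minute and water per stroke are the figures quoted above, while the 150 foot lifting depth is an assumed, illustrative value since only "several hundred feet" is recorded:

    # Rough estimate of the useful output power of an early Newcomen engine.
    strokes_per_minute = 12
    litres_per_stroke = 45.0            # ~45 kg of water raised per stroke
    lift_height_m = 150 * 0.3048        # assumed lift of 150 feet, in metres
    g = 9.81                            # acceleration due to gravity, m/s^2

    mass_flow = strokes_per_minute * litres_per_stroke / 60.0   # kg of water per second
    power_watts = mass_flow * g * lift_height_m
    print(f"{power_watts:.0f} W, about {power_watts / 745.7:.1f} horsepower")
    # ~4 kW, or roughly 5 horsepower of useful work, under these assumptions.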


Because of the high consumption of coal to fuel the engine and its high cost, Newcomen engines were generally found only at pit heads where they were used for draining deep mines.


1713 Prolific French scientist and entomologist René-Antoine Ferchault de Réaumur invented spun glass fibres. In an attempt to make artificial feathers from glass he made fibres by rotating a wheel through a pool of molten glass, pulling out threads of glass where the hot, thick liquid stuck to the wheel. His fibres were short and fragile, but he predicted that spun glass fibres as thin as spider silk would be flexible and could be woven into fabric.

In 1731 Réaumur also invented an alcohol thermometer and a corresponding temperature scale which both bear his name. The temperature scale assigned zero degrees to the freezing point of water and eighty degrees to its boiling point. The freezing point was fixed and the tube graduated into degrees, each of which was one-thousandth of the volume contained by the bulb and tube up to the zero mark. It was an accident of the expansion properties of the particular quality of alcohol employed that made the boiling point of water come out at 80 degrees.
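
Since the Réaumur and Celsius scales share the same zero and differ only in dividing the interval between the fixed points into 80 rather than 100 degrees, conversion between them is a simple ratio. A minimal sketch in Python:

    # Convert between the Réaumur and Celsius temperature scales.
    # Both scales set 0° at the freezing point of water; Réaumur puts
    # the boiling point at 80° where Celsius puts it at 100°.
    def reaumur_to_celsius(t_re: float) -> float:
        return t_re * 100.0 / 80.0

    def celsius_to_reaumur(t_c: float) -> float:
        return t_c * 80.0 / 100.0

    print(reaumur_to_celsius(80.0))   # 100.0 °C, boiling water
    print(celsius_to_reaumur(37.0))   # 29.6 °Ré, body temperature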


1714 The first Mercury thermometer was made by physicist and instrument maker Gabriel Fahrenheit, born in Danzig (now Gdańsk). It had improved accuracy over the alcohol thermometer due to the more predictable expansion of Mercury combined with improved glass working techniques. At the same time Fahrenheit introduced a standard temperature scale, originally calibrated against a freezing brine mixture and body heat, on which water freezes at 32° and boils at 212°.


1714 The British government established the Board of Longitude (BOL) and passed the Longitude Act which offered financial rewards of up to £20,000 (almost £4 million in today's money) for anyone who could find a simple and practical method for the precise determination of a ship's longitude. The requirement was originally defined as a longitude error of less than 0.5°, or 30 arc minutes, after a journey from Britain to any port in the West Indies (lasting about six weeks).

The initiative was in response to a number of maritime disasters attributable to serious navigation errors. These included the Scilly naval disaster of 1707 in which four ships of the British fleet commanded by Admiral Sir Cloudesley Shovell were wrecked on the treacherous rocks off the coast of the Scilly Isles with the loss of almost 2000 sailors' lives including that of the Admiral himself.

At the time, there were rudimentary ways of determining latitude, the North-South position on the Earth, but there was no accurate way of determining longitude, the East-West position. Dead Reckoning was the method used and this involved calculating the current position by using a previously determined position, or fix, and plotting the new position based upon the vessel's known or estimated speeds and the elapsed time and headings over the course. Apart from the difficulty of measuring the speed of a sailing ship, this method was also subject to serious cumulative errors. The disaster was blamed on the navigators' inability to determine their longitude. Shovell's ships however, entering the English Channel from the South, were also many miles in latitude North of their expected course when they hit the Scilly Isles, and besides this, the precise location of the Scilly Isles itself was not accurately known. The navigators did not live to tell their tale since there were no survivors, and there was now a pressing need to find a better way to determine longitude.


Over the subsequent years this generous longitude prize seemed always out of reach as the original 1714 Act was followed by a series of new Longitude Acts which revised or added conditions for claiming the prize and the full prize money was never paid out. The man who eventually claimed the prize, albeit in installments with the balance paid by parliament in 1773, was Yorkshire born carpenter John Harrison who worked for over three decades on solving the problem.


See also alternative methods of determining longitude.


Using a Chronometer to Determine Longitude

The idea of using a clock to determine longitude was first proposed in 1530 by Dutch cartographer Gemma Frisius.

  • In principle it was easy
  • An observer's East-West position is measured with reference to lines of longitude, or meridians which run between the North and South Poles.

    Since the Earth rotates at a steady rate of 360° per day, or 15° per hour, there is a direct relationship between solar time and longitude. (Solar time is the precise time, at a given location, calculated with reference to the apparent position of the Sun. Local time is usually considered to be the same time across an extensive time zone.)

    As the Earth revolves, the Sun's position in the sky, as seen at noon from a fixed reference point, appears to move West, at the same time declining in elevation. In one hour, the Earth will have rotated by 15° but the Sun's position is fixed. During the same hour, to an observer 15° longitude West from the original location, the Sun will appear to be arriving from the East and at the end of the hour, rising to its maximum elevation which is the local noon. At the time of this local noon, a clock at the original, reference location will indicate an elapsed time of one hour.

    By convention, the fixed reference point of 0° for longitude measurements was set on the Prime Meridian, an imaginary line running between the Poles and passing through Greenwich near London, and the reference time was known as Greenwich Mean Time (GMT) or more recently Coordinated Universal Time (UTC) or Zulu Time by the Military. The scale of longitude ranges from 0° at the prime meridian to +180° eastward and −180° westward.


    Thus the difference between the apparent local solar time at any location in the world and GMT can be used to calculate the longitude with each minute of time difference corresponding to 0.25°, or 15 arc minutes difference in longitude equivalent to 15 nautical miles at the equator.


    Notes:

    • The length of the nautical mile was defined in terms of the scale of longitude and the circumference of the Earth at the equator. The 360 degrees of longitude correspond to 360 × 60 = 21,600 arc minutes and one nautical mile was defined as being equivalent to one minute of longitude at the equator.
    • Measured in statute miles, the circumference of the Earth at the equator is 24,901 miles. Thus 1 nautical mile ≡ 1.15 statute miles.

    • At any latitude above or below the equator, the longitude lines get closer together, since the circles of latitude get smaller with increasing latitude, so that the East-West distance corresponding to one minute of longitude decreases from one nautical mile at the equator to zero at the Poles.
      • Example 1 The latitude of Greenwich is 51.48° North. At this latitude the circumference of the Earth is about 13,450 nautical miles and one minute of longitude corresponds to about 0.62 nautical miles. (These figures, and those of Example 2, are reproduced in the worked sketch following these notes.)
      • Example 2 To win the BOL's top longitude prize of £20,000, after an Atlantic crossing to Barbados, situated at 13.19° North, the 30 arc minute longitude error allowed would correspond to a timing error of 2 minutes in time or 29.2 nautical miles (33.7 miles) error in position.
      • For a six week journey, the average timing error (gain or loss) of the ship's chronometer must be less than 2.8 seconds per day to meet the target timing error of less than 2 minutes.

    • The above calculations assume that the Earth's orbit around the Sun is circular, but it is actually elliptical, so that small adjustments must be made using navigation tables.
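
    Putting these relationships together, a minimal sketch in Python of the longitude arithmetic, reproducing the figures in Examples 1 and 2 above:

        import math

        # Longitude from the difference between local solar time and GMT:
        # the Earth rotates 360° in 24 hours, i.e. 15° per hour or 0.25° per minute.
        def longitude_from_time(minutes_difference: float) -> float:
            return minutes_difference * 0.25          # degrees of longitude

        # East-West length of one arc minute of longitude at a given latitude:
        # one nautical mile at the equator, shrinking with the cosine of latitude.
        def nm_per_arcmin(latitude_deg: float) -> float:
            return math.cos(math.radians(latitude_deg))

        # Example 1: Greenwich, 51.48° North
        print(21600 * nm_per_arcmin(51.48))    # circumference ~13,450 nautical miles
        print(nm_per_arcmin(51.48))            # ~0.62 nautical miles per arc minute

        # Example 2: Barbados, 13.19° North. A 2 minute timing error
        # corresponds to 30 arc minutes of longitude:
        arcmin_error = longitude_from_time(2.0) * 60
        print(arcmin_error * nm_per_arcmin(13.19))   # ~29.2 nautical miles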

  • In practice it was difficult
  • Finding the apparent local time was relatively easy by setting the local noon at the time when the Sun was at its highest elevation. The difficulty was in determining the time at a distant reference point such as GMT while on a ship many weeks or months away from port. At that time, the best timekeepers were pendulum clocks but such clocks were useless at sea. There were no clocks that could maintain accurate time during long sea journeys while being subjected to the rolling, pitching and yawing of a sailing ship.


  • Accuracy
  • A timing error of one minute in either the ship's chronometer, or the local measurement of solar time, will result in an error in the longitude measure of 15 arc minutes, no matter how close to, or how far, the ship is from its reference point (such as the Greenwich Meridian) and no matter what course the ship has followed to its current location. The major influence is the elapsed time between synchronising the chronometer with the reference time (e.g. GMT) and the current solar time. This is because the timing error of the chronometer is cumulative over time. The longer the ship is at sea, the greater the inaccuracy of its longitude measurements.


Harrison's Early Clocks

Self-taught John Harrison was brought up in the small village of Barrow in Lincolnshire. An independent minded outsider throughout his life, he was driven by a passion to produce the most accurate and reliable timekeepers and a sheer determination to succeed. For fifty years he produced a series of innovative advances in timekeeping technology culminating with his recognition for solving the longitude problem.

He completed his first pendulum clock in 1713 when he was only 19 years old. Clock making and repairing were initially however only his spare time activities as he followed his father's trade as a carpenter and he did not take up the challenge of designing a marine chronometer in 1714 when the longitude prize was announced. It is not known whether he was even aware of the prize at the time.


Isolated and far from Britain's clockmaking community, his first clocks made before 1720 were all pendulum clocks and used conventional anchor escapements but, apart from that, they were far from conventional, being made almost entirely of wood including the frame, gear wheels and pinions. Three of these clocks have survived and are held in UK collections at the Worshipful Company of Clockmakers and the Science Museum in London, and Nostell Priory near Wakefield.


The Brocklesby Park Clock

A major step forward was the commission to build an outdoor turret clock for the stables of the Earl of Yarborough. A serious issue with early clocks and watches was friction which caused the mechanisms to slow down. Friction also causes wear which leads to erratic timekeeping. The solution was lubrication, but this brought its own serious problems. Lubrication reduced the friction for a short period but early lubricants were derived from animal fats which soon deteriorated and thickened with age, gathering dust and clogging up the gears. The Brocklesby Park Clock was designed to run with minimal friction and without lubrication.

Its unique features included:

  • The use of lignum vitae, a dense oily tropical hardwood, for bearings reduced friction and eliminated the need for lubrication.
  • Gear wheels of oak and box wood, except for the escape wheel which was brass.
  • Gear teeth in small groups mortised into the rim of the gear wheels with the grain in a radial direction to provide maximum strength.
  • A specially designed grasshopper escapement which eliminated the friction between sliding parts by means of a spring mechanism which caused the pallets to jump clear of the escape wheel and thus avoid the need for lubrication.
  • The main driving pinion was in the form of a lantern gear with teeth in the form of tiny lignum vitae rollers, mounted on brass pins so that the teeth made rolling contact with the mating gear wheel.

The clock was finished in 1722 and is still working today in its original location above the stables. Amazingly after almost 300 years of continuous working, it has still not been oiled.


Precision Long Case Clocks

Beginning in 1725, working with his younger brother James, Harrison continued the quest for better timekeeping with the design of three long case (grandfather) clocks. His next major innovation, in 1726, was temperature compensation which he implemented in these clocks.

Huygens had shown in 1656 that the period of a pendulum is proportional to the square root of its length. Harrison was aware that increasing temperature would cause the length of a pendulum to increase and thus cause a clock to lose time. He therefore devised the gridiron pendulum using two metals with different coefficients of expansion, arranged in such a form that the metal with the greater expansion would expand in the opposite direction, compensating for the expansion in the other metal, so that the length of the pendulum was held constant and the clock kept good time. See a diagram of the gridiron pendulum.
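
The sensitivity Harrison was compensating for can be estimated from the pendulum formula. A minimal sketch in Python; the expansion coefficient is a typical modern textbook value for steel, not a figure from Harrison:

    import math

    # Period of a simple pendulum: T = 2*pi*sqrt(L/g)
    def period_s(length_m: float, g: float = 9.81) -> float:
        return 2 * math.pi * math.sqrt(length_m / g)

    print(period_s(0.994))        # ~2.0 s: the "seconds" pendulum beats once per second

    # A temperature rise lengthens the rod and slows the clock. The period
    # grows with the square root of the length, so the fractional loss is
    # roughly half the fractional expansion.
    alpha_steel = 12e-6           # assumed expansion coefficient of steel, per °C
    delta_T = 10.0                # an assumed 10 °C rise in temperature
    print(86400 * alpha_steel * delta_T / 2)    # ~5.2 seconds lost per day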


The timekeeping accuracy of these clocks was so good that there were no reference timers accurate enough to measure their performance. He therefore had to check their timekeeping accuracy against apparent star movements. For this he noted the time when a reference star passed behind a fixed object (his neighbour's chimney stack) on subsequent nights. Sidereal time is the time based on the Earth's rotation relative to the fixed stars rather than the Sun's position and is easier to observe than the bright Sun. A mean sidereal day is 23 hours, 56 minutes and 4 seconds long, which means that a reference star would pass behind the chimney 3 minutes and 56 seconds earlier each day, providing Harrison with a very precise timing reference.

He determined that his three clocks achieved the astonishing accuracy of one second error per month, far exceeding the accuracy of a few seconds error per day achieved by the best London clocks of the day.


Their accuracy was also many times better than the 2.8 seconds per day accuracy needed to win the longitude prize which no doubt piqued Harrison's interest. If only he could get rid of the troublesome pendulum!


Harrison's Clockmaking Resources

Harrison achieved his remarkable developments with the most meagre of resources.

  • There were no simple mathematics to analyse the dynamic performance of the moving parts of the clock mechanisms when subject to random external forces.
  • There were no published data on the performance of materials and structures such as tensile strength, elasticity, coefficient of expansion or the effects of temperature, humidity and mechanical shock.
  • The lack of published data meant that he had to generate the data himself or proceed by "trial and error".
  • Without data and analytical tools it was easy to be diverted down blind alleys.
  • There were no high performance materials such as plastics or lubricants.
  • The materials which were available were of variable quality.
  • He had to make every single component himself including, gear wheels, springs, screws, spindles, bearings, casings, mounting plates, winders, pointers, pendulum rods and mounts, even the links in the fusee chains.
  • Tools in Harrison's time were still quite rudimentary and like all craftsmen of the period, he had to make his own. It was another century before the simple twist drill bit was invented.
  • With the only means of making precise timing measurements being by the observations of star movements at night, it could take weeks to verify the effect of minor adjustments.
  • All of these issues meant that progress was extremely slow.

Countering all these shortcomings, the greatest resource was Harrison himself.

  • He was innovative, self reliant and doggedly determined. If he encountered a technical problem he would design an alternative solution to avoid it, but if this was not possible he would design a method to compensate for it. His quest for the perfect, friction free timekeeper was never ending.

Harrison's Marine Chronometers

By 1728 nobody had come up with a viable solution to the longitude problem and the Longitude Prize was still unclaimed. The best portable timekeepers of the day were watches and their accuracy was worse than one minute per day while Harrison's pendulum clocks were capable of better than one second per month. Harrison was confident that he could produce a portable ship's clock which could meet the Board of Longitude (BOL) requirement of 2.8 seconds per day and set to work on plans for such a clock. He took the plans to London, his first ever trip South, to seek funding from the BOL and the advice and support of Edmond Halley, the Astronomer Royal and a member of the Board.

The BOL members included 6 top navy men, the potential users, 12 members of parliament who looked after the nation's purse strings and 6 top astronomers, mathematicians and academics to assess the technical merits of the proposed solutions. Halley warned that this, latter, technical group favoured astronomical navigation methods and were not well disposed to mechanical devices and while he was sympathetic, he advised Harrison to seek funding elsewhere and suggested that he visit George Graham, the country's foremost clock maker. Despite having his own deadbeat escapement which rivalled Harrison's grasshopper design, and having failed in his own attempts to produce a working temperature compensation design himself, Graham was helpful and lent Harrison £200, interest free, to start work on his ship's clock. Halley also remained an important supporter of Harrison.


Over a period of 30 years Harrison produced a series of four different marine chronometers, later designated as H1 to H4 and a copy H5. See photographs of Harrison's Marine Chronometers


H1 Chronometer

Harrison's first chronometer, H1, was started in 1730 and completed in 1735. The objective was to make a seagoing version of his wooden pendulum clocks. He retained the wooden gear wheels with anti-friction bearings and roller lantern pinions as well as the grasshopper escapement. The rest of the ideas were all new.

  • To make the machine completely independent of gravity and the motion of a ship, it was spring-driven, with all moving parts counterbalanced and controlled by springs.
  • The main driving power came from two mainsprings spaced 180° apart connected through a single fusee (see diagram) to even out variations in the spring forces of the two springs and to minimise the unbalanced force on the fusee.
  • The main gear wheels rotated on unusual friction free "open" balanced roller bearings of Harrison's own design.
  • The pendulum was replaced by a timing oscillator consisting of two 5 pound, dumbbell shaped rocking bar balances linked together by cross wires and oscillating opposite each other in antiphase so that the effects of the rolling motion of the ship on one bar would be compensated by the effects on the other bar.
  • Two helical springs connecting the upper ends of each dumbbell bar and another pair connecting the lower ends provided the impulse and restoring forces to keep the dumbbells in motion.
  • Temperature compensation was provided by attaching each balance spring by a lever to a version of the gridiron compensator, the first ever application in a balance spring regulator.
  • Harrison also invented the going fusee, a mechanism for the H1 which kept it going while being wound up. Known more generally as maintaining power it has been used extensively in spring-driven clocks and watches ever since.

The H1 was made from 1,440 parts, over 5,400 if the chain links are included, and weighed 34 kg.

In use it was mounted on gimbals and ran for 38 hours on one winding.


H1 Chronometer Sea Trials

Sea trials were belatedly arranged by the Admiralty in 1736, with a journey to Portugal rather than the West Indies voyage specified for claiming the prize.

The clock did not perform well during the one week outward journey in rough seas to Lisbon, and Harrison even less so, being seasick the whole time. The return journey, which took one month in mixed weather, was more successful. When the English coast was sighted, the ship's Master, Roger Wills, and his officers, having used traditional navigation methods, identified it as Start Point, just East of their destination, Portsmouth. But Harrison's own chart, plotted using H1, placed them correctly 68 miles further West, at Lizard Point and potentially in peril. By coincidence Wills' error was similar to the one which caused the demise of Admiral Shovell who ran into the Scilly Isles 55 miles West of Lizard Point.


The accuracy of Harrison's navigation was acknowledged by Wills who reported positively to the BOL. (The timekeeping accuracy of H1 was subsequently estimated as between 5 and 10 seconds per day). This was not enough to claim the longitude prize, but it was the first workable marine timekeeper and the BOL were sufficiently impressed that in 1737 he was awarded £250 to continue his experiments and the promise of £250 more on successful completion of a second approved machine. This enabled him to start work on H2, a more rugged and compact version of the H1. This was the first ever government sponsored Research and Development programme.


H2 Chronometer

In 1736 Harrison moved to London, closer to the clockmaking community, to start work on H2. It followed the same basic design as H1 using a grasshopper escapement but with all the wooden parts changed to brass and improved gridiron temperature compensation. It ended up being taller and heavier than H1.

It did however have one further innovation. In 1739 Harrison invented the spring remontoire, a more controlled, secondary driving force which improved timekeeping regularity by separating the sensitive escapement from the main driving force thus avoiding variations in the driving force due to the mainspring winding down or caused by small errors in the manufacture. In the H2 the remontoire spring was rewound every 3 minutes 45 seconds.


In 1741, after three years of building and two of testing, H2 was ready for sea trials, but Britain was at war with Spain in the War of Austrian Succession and the trial was postponed because the government deemed that the clock was too important to risk falling into enemy hands. Shortly afterwards Harrison came to the conclusion that the H2, like the H1, was too cumbersome and the slow moving heavy dumbbell balances could not fully cancel all the possible ship's motions as expected. It was reluctantly abandoned and never submitted for sea trials.

In the meantime, he had already started work on a new sea clock, H3, with circular balance wheels instead of the heavy rocking arms, for which he requested, and received, a further grant of £500 from the BOL.


H3 Chronometer

Starting in 1740, Harrison spent 19 years working on H3, during which the BOL supported it with grants totalling £3000, before it too was abandoned.

It was smaller and lighter than the previous two clocks and used a similar grasshopper escapement and a 30 second remontoire, but the large heavy balance wheels were just as susceptible to disturbance by the sea's forces as the previous balance bars. Another major difficulty was the lack of detailed theoretical knowledge of the properties of springs. It was not until 1807 that the notion of elasticity was defined and quantified by Thomas Young. The H3's two balance wheels were mounted one above the other and linked together by cross wires. A single, short, spiral balance spring controlled the upper wheel only in place of the four helical springs controlling the balance bars of the H1 and H2 and Harrison was unable to get this mechanism to work isochronously so that he was unable to achieve the necessary timekeeping accuracy.


Nevertheless, during this development period Harrison invented two new mechanisms for the H3 which are still used today. These were:

  • The Caged Roller Bearing in which the wheel shaft rotates between four bronze rollers held in a light brass cage so that there is only rolling motion and no sliding friction between the shaft and the bearing. This was the forerunner of the ubiquitous modern ball bearing.
  • The Bimetallic Strip which Harrison called his "thermometer curb". Constructed from brass and steel it bends under the influence of temperature, (See diagram) and this movement was used to shorten or increase the effective length of the balance spring. Shortening the spiral spring increases its stiffness, compensating for the weakening of the spring as the temperature increases, while lengthening it has the opposite effect, compensating for the cold.

The "Jefferys" Watch

While still struggling with the H3, in the early 1750s Harrison turned his attention to watches and designed a precision watch for his own personal use, which was made for him by the watchmaker John Jefferys. Completed in 1753, it used a novel vertical, recoil free, frictional rest escapement, similar to the verge balance spring escapement and was the first to incorporate in a watch some of the innovations developed for Harrison's clocks including temperature compensation and the going fusee.


Surprised by the accuracy of the watch's timekeeping, he began to realise that for over 20 years he had been working on the wrong track with his three sea clocks and that a watch would better satisfy the BOL requirements for a "practical" solution. He came to the conclusion that the secret to stability was small high frequency oscillators and that the large heavy balances in his sea clocks could not oscillate quickly enough to ensure stable timekeeping and that a smaller watch could oscillate at a much higher speed. This was one of Harrison's great insights.

He therefore admitted defeat and turned his attention to the design of a sea watch, H4.


H4 Chronometer

In 1755 Harrison requested a further grant from the BOL to complete the H3 and to produce two sea watches, the H4 plus a smaller version. The BOL, still supporting the project, approved a grant of £2,500.


The H4 Sea Watch is housed in a silver case 13 cm (5.1 inches) in diameter like a large pocket watch and weighs 1.45 kg. It was based on Harrison's "Jefferys" Watch with the following innovations:

  • It had a high energy isochronous escapement which made it less affected by the slower ship's motions. This was accomplished by means of a heavier balance wheel with a greater amplitude swing of ±145° oscillating five times per second so that it carried much more kinetic energy making it less vulnerable to physical disturbance.
  • The escapement was driven by a remontoire, rewinding eight times a minute, to even the driving force.
  • A balance-brake stops the watch 30 minutes before it is completely run down, in order that the remontoire does not run down also.
  • Because the watch was too small to incorporate Harrison's anti-friction devices some of its bearing surfaces required oil, however wherever possible jewelled (ruby and sapphire) bearings were fitted to reduce friction.
  • Diamonds were used for the surface of the escapement pallets.

In common with the Jefferys Watch it also had temperature compensation by means of a bimetallic strip and maintaining power by means of a going fusee.


The H4 Sea Watch was completed in 1759 and was submitted in 1760 to the BOL for sea trials. They awarded Harrison £250 to prepare and carry out the trials of H3 and H4 on a voyage to Jamaica in 1761. It had taken six years of development and testing.


Rival Methods

Meanwhile, German astronomer Tobias Mayer had developed an alternative method of determining longitude, originally suggested in 1514 by another German astronomer, Johannes Werner. Known as the lunar distances method, it was based on the position of the Moon relative to other fixed celestial bodies. Because the Moon orbits the Earth in a regular orbit, moving at around 13 degrees per day relative to the fixed stars, its current position (angle) relative to a reference star, compared to its known position relative to the same reference star as seen from some other terrestrial reference point such as Greenwich, could be used to calculate the current time difference between the two points. From the time difference, the longitude could be calculated. Unfortunately it took about four hours to perform these calculations, by which time the ship would have moved to a new position.

The method only needed a sextant to make the observations and did not need an expensive chronometer.
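
A minimal sketch in Python of the timing arithmetic behind the lunar distances method; the 1 arc minute observation accuracy is an assumed figure for a good instrument, not a value from Mayer:

    # The Moon moves through 360° against the fixed stars in one sidereal
    # month of 27.32 days, i.e. roughly 33 arc minutes per hour.
    moon_rate_arcmin_per_hour = 360 * 60 / (27.32 * 24)
    print(moon_rate_arcmin_per_hour)            # ~32.9 arc minutes per hour

    # An observation good to 1 arc minute therefore fixes Greenwich time to:
    observation_error_arcmin = 1.0              # assumed sextant accuracy
    time_error_min = observation_error_arcmin / moon_rate_arcmin_per_hour * 60
    print(time_error_min)                       # ~1.8 minutes of time, i.e. ~0.46° of longitude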

In 1752 Mayer published initial tables of lunar distances which he had calculated. The latest update of these tables had also been sent in 1755 to Britain's then Astronomer Royal James Bradley who became a staunch advocate of the method. The recent invention of the sextant in 1757 had also improved the practicality of making the necessary celestial measurements, strengthening the case. The sextant was also much less expensive than the chronometer.


Notes: In 1612, Galileo had proposed an accurate way of determining longitude based on observations of the eclipses of Jupiter's natural satellites, but such observations were impractical from a ship at sea.

In modern practice, a nautical almanac and nautical tables enable navigators to use the Sun, Moon, visible planets or any of 57 navigational stars for celestial navigation.


In 1760 the Royal Society appointed astronomer, the Reverend Nevil Maskelyne, to undertake an expedition to St Helena to observe the 1761 Transit of Venus with the objective of calculating the distance between the Earth and the Sun. Maskelyne used the opportunity to verify Mayer's method of lunar distances for calculating longitude and after his return he published British Mariner's Guide in 1763 explaining the method and showing some example lunar distances. This was followed by the Nautical Almanac in 1767 in which he provided more comprehensive tables of computed lunar distances from the Moon to the Sun and seven stars, every three hours for the whole of 1767.

Based on the Mariner's Guide, Maskelyne staked his claim for the longitude prize. The lunar distances method had been championed by James Bradley, who had succeeded Halley as Astronomer Royal in 1742.


With these developments just beginning in 1760, the astronomers were also preparing their bid for the prize, and the sea trials of the H4 were delayed by Bradley until late 1761. By then Harrison was 68 years old and the H3 and H4 chronometers were sent on their journey to Jamaica in the care of his son William.


H4 Chronometer Sea Trials

It is not unusual for a timekeeper to have a fixed rate of time loss or gain, called the "rate". What is important is that the rate should not vary. If it is fixed it can be allowed for.

Before the trial, the H4 chronometer was calibrated by the Naval Academy at Portsmouth and determined to be 3 seconds slow with a fixed "rate" of time loss of 24/9 (i.e. 2.67) seconds per day.


During the first leg of the journey to Madeira, after 9 days out, the ship had run out of key provisions. Harrison predicted landfall the following day but Captain Dudley Digges disagreed, pointing out that by his own calculations they were over 100 miles from Harrison's position, and wagered that Harrison was wrong. When land was sighted the following morning, the young Harrison was proved right and Digges honoured his bet and offered to buy the first available chronometer of Harrison's design.

Continuing on their journey, they reached Kingston in Jamaica in 1762 after a total of 81 days and 5 hours at sea while the ship's log showed them to be well over 100 miles away. After allowing for the accumulated daily "rate" of time loss amounting to 3 minutes 36.5 seconds and an initial error of 3 seconds, Harrison's chronometer had lost only 5.1 seconds over the whole period as determined by solar measurements. This corresponded to an error in longitude of only 1.25 arc minutes, or approximately 1 nautical mile, compared with the known longitude of Kingston and well within the BOL requirement of 2 minutes in time or 30 arc minutes (0.5 degrees).
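
The rate correction arithmetic can be checked directly. A minimal sketch in Python reproducing the figures above:

    # H4's Jamaica trial: apply the pre-declared "rate" of time loss.
    days_at_sea = 81 + 5 / 24              # 81 days and 5 hours
    rate_loss_per_day = 24 / 9             # declared rate: 2.67 seconds lost per day

    print(days_at_sea * rate_loss_per_day) # ~216.6 s, i.e. 3 minutes 36.5 seconds

    # The residual error after the rate and the initial 3 second offset
    # were allowed for was 5.1 seconds. In longitude, at 0.25 arc minutes
    # per second of time:
    print(5.1 * 0.25)                      # ~1.3 arc minutes, about 1 nautical mile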


When the ship returned in 1762, Harrison expected to receive the £20,000 prize but he was sorely disappointed. His previous support from the BOL had evaporated. His original supporters Halley and Graham had been dead for several years and the BOL was still dominated by astronomers led by Bradley, the Astronomer Royal, who favoured the lunar distances method of determining longitude. The BOL came up with numerous arguments not to pay and demanded another trial.

  • The results were too good to be true.
  • The demonstrated accuracy was down to luck.
  • A timekeeper which took six years to construct did not meet the test of practicality required.
  • The location of Kingston was not known accurately.
  • The calibrated "rate" loss had not been declared before the voyage, implying that it must have been chosen after the event to fit the desired result.
  • It must have been a fluke.
  • Positive and negative errors had cancelled out.

Their conclusion was that there was insufficient evidence from the sea trials to qualify for the prize and that the chronometer should be subject to a second sea trial to prove the accuracy and viability of the watch.

Harrison was awarded £1,500 for the progress and promised a further £1,000 on completion.


The Second H4 Sea Trial

After much bitter argument it was agreed that the second trial would be a journey to Bridgetown in Barbados. Harrison was given 4 or 5 months to prepare and to calibrate the loss "rate" and the journey would take place in 1764 with the H4 in the care of Harrison's son William.

Much to Harrison's annoyance Maskelyne, his competitor for the prize, was sent to Barbados in 1763 to confirm its exact longitude using observations of Jupiter's satellites and, during the journey, to verify the suitability of Mayer's latest lunar distance tables for determining longitude. Such a conflict of interest would never be allowed today.


Before the journey Harrison gave calibration "rates" from 3 seconds per day gain at 42°F to 1 second per day loss at 82°F or an average of 1 second per day gain.

After a voyage of 47 days the timing error was just 39.2 seconds after the correction for "rate". This was less than one second per day and corresponded to a positional error of 9.8 miles (15.8 km) at 13.19° North, the latitude of Barbados. This was three times better than the performance needed to win the full £20,000 longitude prize.

By comparison Maskelyne's calculations based on lunar distances were also reasonably close with a positional error of 30 miles (48 km) at Barbados but they required several hours of calculation to determine each position during the journey.


On the ship's return to Portsmouth after a two way journey of 156 days, and applying the average predetermined rate correction of 1 second per day, the watch had gained 54 seconds amounting to a third of a second per day. If the declared variable rate corrections for the temperature changes had been applied, the error would have been less than one tenth of a second per day. Surely enough to claim the prize. But Harrison was to be thwarted once more.


The Final Hurdles

By the time of the BOL review of the trial in 1765, Maskelyne had been appointed Astronomer Royal. In his report on the trials Maskelyne was negative about the watch, claiming once again that the accuracy of the measurements was down to luck and that the watch did not meet the needs of the BOL. The BOL consequently insisted that Harrison was only eligible for half of the prize money and applied a new set of conditions with which he must comply before he could even be awarded that.

The matter eventually reached Parliament, which offered Harrison £10,000 in advance and the other half once he handed over the design to other watchmakers to duplicate what had originally been considered to be a military secret. In the meantime he must disclose full design details of the mechanism to a BOL scientific committee and the watch would have to be handed over to the Astronomer Royal for long term testing. Eventually he reluctantly agreed and was awarded £7,500 since he had already received £2,500. Mayer was posthumously awarded £3,000 for his lunar distance method and tables.


Maskelyne, who had not given up his own claims to the longitude prize, in 1766 produced a government warrant confiscating Harrison's three remaining timekeepers, which were to become public property and subject to rigorous testing. Needless to say, they were treated very roughly. H4 had already been dismantled for disclosure to the board and was in need of cleaning and adjustment. After a 10 month trial H4 had gained 1 hour, 10 minutes and 27.5 seconds. Based on this Maskelyne pronounced that the watch could not be relied upon to keep the longitude on a six week journey to the West Indies despite the fact that it had already been demonstrated twice in practice.

In 1766, in response to Harrison's claims for the second £10,000, the BOL also insisted that he must arrange the production of two copies of the sea watch to prove it was not a fluke. The first, known as K1, was made by watchmaker Larcum Kendall and completed satisfactorily in 1769. Kendall had been a member of the BOL's scientific committee which had reviewed the H4 watch. The BOL insisted that the second copy had to be made by John Harrison himself. He was now 73 years old.

In 1767 the BOL published "The Principles of Mr Harrison's Timekeeper" making public the results of over 30 years of his work.


The H5 Chronometer

H5 was the copy of H4 which was demanded by the BOL and it was completed by Harrison in 1772 when he was 79 but the BOL still refused to pay up. The unschooled carpenter from the North was always at a disadvantage when arguing with the capital's aristocracy.

Frustrated and angry, Harrison appealed to the King, George III, who was appalled by their treatment. In response he conducted his own private tests on the H5 watch, monitoring it daily. It performed superbly, losing only 4.5 seconds in two months. Nevertheless the BOL refused to recognise the results of this independent trial. As a result the King advised John and William to petition Parliament, threatening to appear in person to support their claim. In June of 1773, by Act of Parliament, the government finally awarded Harrison £8,750, most of the balance of the £20,000 still owing.


The Board of Longitude Prize was never awarded.


Epilogue

The development of the first true chronometer was the life's work of one man, John Harrison, who never gave up despite numerous disappointments and setbacks during 31 years of persistent experimentation and testing. Harrison's chronometers revolutionised seafaring in the eighteenth century.

Initially they were very expensive. The K1 cost £450, an enormous sum at a time when the cost of a new ship was only around £10 per ton of displacement, but prices began to fall as the chronometer's value was recognised and it became the preferred method for determining longitude.


The K1 was given to explorer Captain James Cook to trial on his three year (second) voyage of discovery to the South Sea Islands and subsequently used by him on his third voyage, having used the lunar distance method for navigation and surveying on his first voyage. He found it exceeded his expectations and became a great advocate for the chronometer. A second copy K2 was used by Lieutenant William Bligh, Captain of HMS Bounty, but taken by Fletcher Christian during his infamous mutiny in 1789.


You can still see Harrison's original sea clocks and watches.

H1, H2, H3, H4, K1 and K2 are displayed at the UK National Maritime Museum, Greenwich, London

H5 is held at the Worshipful Company of Clockmakers in London.


1725 French weaver Basile Bouchon used a perforated paper roll in a weaving loom to establish the pattern to be reproduced in the cloth. This was the world's first use of manufacturing automation by means of a stored program controlling an automated machine.


1728 Another French weaver, Jean Falcon worked with Bouchon to improve his design by changing the perforated paper roll to a chain of more robust punched cards to enable the program to be changed more quickly.


1729 English chemist Stephen Gray was the first to identify the phenomenon of electric conduction and the properties of conductors and insulators and the first to transmit electricity over a wire. In an experiment, a young boy was laid across two swings suspended by silk ropes which insulated the boy electrically from the ground. The boy's body was charged up from a Hauksbee machine and when the boy held his hand above flakes of gold leaf on the floor, the flakes were picked up by electrostatic attraction to his hand. Electric charge was thus shown to be conducted through the boy's body to his hand but not through the insulating silk ropes to the ground.

Gray subsequently sent charges nearly 300 feet over brass wire and moistened thread and showed that electricity doesn't have to be made in place by rubbing but can also be transferred from place to place with conducting wire. An electrostatic generator powered his experiments, one charge at a time. This was the forerunner of the electric telegraph.


1730 The octant, forerunner of the sextant was independently invented by English mathematician, John Hadley, and Thomas Godfrey, an American glazier in Philadelphia. The instruments enabled the precise measurement of the angle between two distant landmarks as seen by the observer. Their prime application however was for navigation where they were used to determine the angle of elevation between a celestial object and the horizon.


The principle of the "reflecting quadrant" or "octant", a doubly reflecting optical instrument, was first described in detail by Isaac Newton in 1699 in a letter to Edmond Halley, Britain's Astronomer Royal, but the description was not published until after Halley's death in 1742. The first sextant was made by London instrument maker John Bird in 1757. It was simply a scaled up version of the octant, requested after sea trials by British Admiral John Campbell who found the octant's 90° measurement range was too restrictive for lunar measurements and asked for it to be increased to 120°.


Mariners had for centuries used the principles of celestial navigation as a basis for determining their latitude by measuring the angle of elevation above the horizon of the Sun at solar noon, or Polaris, the North Star, at night (in the Northern hemisphere), but their instruments, ranging from the cross staff and astrolabe to a simple tilting quadrant scale with a plumb bob, were very inaccurate. They were also difficult to use since, while standing on a pitching and rolling ship, the user had to simultaneously observe the horizon and the target celestial object, both of which move around in the observer's field of view.

The sextant, an optical instrument based on two reflecting mirrors, greatly improved the accuracy and simplicity of making these navigation sightings by superimposing the images from the horizon and the target in a single viewfinder. In this way the relative position of the two images remains steady in the viewfinder of the sighting telescope making the observation much easier to manage as the ship pitches and rolls.


The invention of the sextant was a major step in improving safety at sea. Sextants are still used today as emergency back-up in case of failure of modern electronic navigation systems. Unlike GPS satellite navigation systems they are completely autonomous and don't need electricity to get a fix on a position. They are even used for navigation in space where they provide precise calibration for correcting the drift in the guidance system which can occur with spacecraft inertial navigation platforms.


How it Works

The marine sextant enables the observer to view both the horizon and the target celestial object simultaneously. Light from the horizon enters the sextant's sighting telescope directly while light from the target object is directed via a tilting mirror into the same telescope and superimposed on the image of the horizon. By tilting the mirror, the image of the target object can be brought into line with the image of the horizon and the measured angle of tilt is used to derive the angular elevation of the target.


See a diagram of a Sextant illustrating its workings.

The sextant has two lines of sight, one from the horizon and one from the target navigational marker (either the Sun or a star). The line of sight from the horizon, known as the boresight, is a straight line passing along a fixed path directly into the sighting telescope via the transparent half of a "half-silvered horizon mirror" which splits the view horizontally and provides a full view of the horizon. Alternatively the view may be split vertically by means of a "half-horizon mirror" through which the path of the horizon line of sight passes through its clear side.

The line of sight of the target object (the Sun or the star) is reflected from the "index mirror" onto the horizon mirror which in turn redirects it into the sighting telescope so that the images of the target and the horizon are superimposed. By adjusting the angle of the index arm, the image of the target can be lined up with the horizon. The angle of elevation of the target can then be read off from the graduated scale. A 1° movement of the index arm corresponds to a 2° difference in the elevation of the line of sight to the target. This is because the change in the angle between the incident and reflected rays on the index mirror is double the change in the angle of incidence of the rays on the mirror caused by rotating the index arm. (Angle of reflection = Angle of incidence). Thus the scales of the octant which covers an arc of 45° and the sextant which covers an arc of 60° are graduated from zero (or below) to 90° and 120° respectively.
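
This angle-doubling is easy to verify numerically. A minimal sketch in Python, treating directions as angles in the plane of the instrument:

    # Direction of travel of a ray, in degrees, after reflection from a
    # flat mirror whose face lies along mirror_deg: the component along
    # the mirror is kept and the normal component is reversed.
    def reflect(ray_deg: float, mirror_deg: float) -> float:
        return (2 * mirror_deg - ray_deg) % 360

    incoming = 0.0                    # fixed direction of the incoming light
    print(reflect(incoming, 45.0))    # 90.0°
    print(reflect(incoming, 46.0))    # 92.0°
    # Rotating the mirror by 1° swings the reflected ray by 2°, which is
    # why the sextant's 60° arc is graduated up to 120°.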

Filter glasses can be moved into the optical paths to reduce the intensity of the Sun's rays in order to protect the user's eyes from harm.


The sextant's graduated scale will indicate the angle of elevation, confusingly called the altitude or the height even though it is measured in degrees, of the target celestial object above the horizon. The apparent position of the Sun in the sky varies with the seasons, in the northern hemisphere being higher in the summer than in the winter, and it varies with the time of day, being at its highest at noon. The actual latitude must therefore be determined from navigation tables which show the true latitude corresponding to the elevation measured, with correction factors depending on the month and day of the year and on the precise time of the day as registered by the ship's chronometer when the sighting was taken. At the same time the tables also provide the ship's longitude corresponding to the noted chronometer reading. Thus the ship's complete geographical position can be determined.


Finding Latitude Using Polaris

The line of sight to the horizon at any point on the Earth is very close to a tangent to the Earth's surface, (see corrections below).

Polaris is a distant star in the northern sky lying on a line coincident with the axis of the Earth. As the Earth makes its annual orbit around the Sun and makes its daily revolution on its axis, Polaris appears to be stationary in the sky on a line perpendicular to the Earth's equator, passing through the North Pole. It is so far away that light rays impinging on the Earth appear to be parallel.

For an observer situated on the equator, Polaris will appear to be exactly on the northern horizon and its elevation will be zero, since the lines of sight to both the horizon and the star are parallel to the Earth's axis. For an observer at the North Pole, Polaris will appear to be directly overhead, at an elevation of 90° to the line of sight to the horizon. These two elevations correspond to the latitudes at those points. At any intermediate point between the North Pole and the equator, the elevation indicated on the scale of the sextant corresponds directly to the true latitude of the location.

Unfortunately there is no equivalent South Pole star and alternative methods of determining latitude must be used in the Southern hemisphere.


Finding Latitude Using the Sun

Because the reference position of the Sun is in the plane of the equator, the measured angles of elevation will be displaced by 90° from the angles measured using the Polaris reference. Thus at noon on the vernal or autumnal equinox (when the daytime and night time are approximately equal), on the equator (latitude zero) the Sun will be directly overhead and the sextant will indicate the angle of elevation of the Sun to be 90°. At the same time, since the distant Sun's rays are essentially parallel, at the North and South Poles (90° latitude) the Sun will appear to be on the horizon and the sextant will indicate the Sun's angle of elevation to be 0°. At the Poles, and any location in between, the latitude can be determined by subtracting the sextant reading from 90°.

But this is not a practical way of determining latitude since equinoxes occur only twice a year. Using the Sun to determine latitude is much more complicated because the Sun does not appear as a stationary reference target as Polaris does. There are two reasons for this.


The first is that the Earth's axis is tilted at a fixed angle of 23.45° with respect to the plane of its orbit around the Sun so that, as it makes its 12 monthly orbit, the highest position of the noon-day Sun, as seen from the Earth, appears to move between 23.45° above the equator and 23.45° below the equator as the Earth moves between opposite sides of the Sun. See diagram of Earth's tilted orbit.

In the Northern hemisphere, at noon on the summer solstice, (the longest day), the Sun will be directly over the Tropic of Cancer at a latitude of 23.45° North. At noon on the winter solstice, (the shortest day), the Sun will be directly over the Tropic of Capricorn at 23.45° South. These observations are mirrored in the Southern hemisphere.

The apparent position of the Sun or other celestial object above or below the Earth's equator is known as its declination and the solar declination depends on the angular distance of the Earth around its orbit of the Sun, in other words, on the date.


The second variation arises because the Earth rotates once per day so that the Sun appears over the horizon at dawn, rises to its highest elevation at noon, then declines and disappears below the horizon in the evening. Thus the observed elevation of the Sun depends on the time of day. For consistency and simplicity, sightings are normally taken at noon when the Sun appears at its highest position in the sky. At any other time, corrections must be applied for the Sun's changing elevation with the time of day.


True latitudes on any particular day are therefore determined from published navigation tables, which show the solar declination for every day of the year, by applying the following calculation:

  • Latitude = (90° - Sextant Angle) + Declination of the Sun if the observer is in the same hemisphere as the Sun
  • Latitude = (90° - Sextant Angle) - Declination of the Sun if the observer is in the opposite hemisphere from the Sun

Corrections for time and minor corrections for the height of the observer above the Earth's surface must also be applied. Any small perturbations in the Earth's orbit are already taken into account in the basic navigation tables.
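
A minimal sketch in Python of the noon-sight rules above; the declination figure is illustrative, standing in for a value read from the navigation tables:

    # Latitude from a noon sight of the Sun, following the two rules above.
    def latitude_from_noon_sight(sextant_deg: float, declination_deg: float,
                                 same_hemisphere: bool) -> float:
        zenith_distance = 90.0 - sextant_deg
        if same_hemisphere:
            return zenith_distance + declination_deg
        return zenith_distance - declination_deg

    # Illustrative example: noon altitude 60.0°, Sun's declination 20.0° North,
    # observer also in the Northern hemisphere:
    print(latitude_from_noon_sight(60.0, 20.0, True))   # 50.0° North

    # A Polaris sight (Northern hemisphere) is simpler still: the star's
    # elevation is itself the latitude, apart from small corrections.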


Before the availability of accurate chronometers such as those first pioneered by John Harrison, the sextant was also used to determine the time and hence the ship's longitude by measuring the angle between the Moon and other celestial objects, the so called "lunar distance". Because the Moon makes regular orbits of the Earth once every 27.32 days, its position can be used as a timing reference. Greenwich time corresponding to the observed lunar distance could then be found from a nautical almanac and from the difference between the Greenwich time and the local time the longitude could be calculated.


Accuracy

The accuracy of the sextant depends on the precision and skills of the instrument maker. The measurement accuracy of Bird's sextant was 2 arc minutes. Since one arc minute of latitude corresponds to one nautical mile, this represents a possible latitude error of about 2 nautical miles. Modern sextants typically have a measurement accuracy of around 0.1 arc minutes, or 0.1 nautical miles, which is about 200 yards. At sea, results within the visual range of several nautical miles are often considered acceptable. There is also the possibility of user set-up errors, but adjustments are usually provided to correct these.

Correction Factors

Besides the accuracy of the instrument itself, there are several further factors affecting the accuracy of the measurement. The line of sight to the horizon of the ocean is not a true tangent to the Earth's or the sea's surface but depends on the height of the sextant telescope, or the observer's eye, above the surface. This correction, known as the "dip", must be subtracted from the sextant reading. The dip in arc minutes is given by:

Dip correction = - 1.76 × √(eye height in metres)

or

Dip correction = - 0.97 × √(eye height in feet)

Thus for a reading taken 5.5 metres or 18 feet above sea level from the deck of a ship, the dip correction will be - 4.1 arc minutes, corresponding to an adjustment in the calculated latitude of about 4.1 nautical miles.
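A quick numerical check of these figures in Python (illustrative only):

# Check of the dip correction quoted above.
import math

def dip_arc_minutes(eye_height_metres):
    # Dip of the horizon, in arc minutes, for a given eye height in metres
    return 1.76 * math.sqrt(eye_height_metres)

print(round(dip_arc_minutes(5.5), 1))  # 4.1 arc minutes, i.e. about 4.1 nautical miles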

There are also slight, recurring irregularities in the movement of the Earth which introduce further potential errors. Another correction allows for the sighting to be made on the centre or on the edge (limb) of the Sun. The navigation tables provide compensation for most of these errors.


1733 French soldier, diplomat and chemist Charles-François de Cisternay du Fay discovered two types of electrical charge, now known as positive and negative, which he called "vitreous" and "resinous" after the materials used to generate the charge.


1733 John Kay of Bury, Lancashire (no relation to John Kay of Warrington) patented the flying shuttle, the device used in weaving looms which carries the weft threads (across the width of the cloth) between the warp threads (along the length of the cloth). In a traditional hand loom, the weft thread was held in a natural reed shuttle which was propelled by hand across the loom between the warp threads, pulling the weft behind it along a track called the race. It was a slow process, and to produce wide bolts of cloth it needed two weavers, one at each side of the loom, to catch and return the shuttle. In Kay's system, a mechanism at each end of the race caught the shuttle and sent it back to the opposite side. The shuttle itself was made of metal and, being heavier than the reed shuttle, it had more inertia to carry it across the loom. This system enabled much faster weaving speeds and the production of greater widths of cloth with only one operator per loom instead of two, as well as reduced manual intervention in the process.


The introduction of the flying shuttle was however perceived as a threat to their livelihoods by textile workers, who resisted it, and Kay had great difficulty in collecting the royalties on his patents.

On the positive side, the increased production of cloth created a demand for thread which exceeded the industry's production capacity, prompting the mechanisation of the thread spinning process.


The invention of the flying shuttle was one of the first examples of mechanisation being used to improve productivity and a significant first step in the Industrial Revolution.


1733 French Huguenot mathematician Abraham de Moivre, living in England to escape religious persecution in Catholic France, derived and published the formula for the Normal Distribution which he used to analyse the magnitude and the probability distribution of errors. Also called the Bell Curve and the Gaussian or error distribution, but strangely never named after de Moivre, besides describing the distribution of measurement errors it is widely used to represent the distribution of characteristics which cluster round a mean value, from the spread of tolerances on manufactured parts to anthropometric and sociological data about the general population. See diagram of the Normal Distribution.
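In modern notation (not de Moivre's own), for a mean μ and standard deviation σ the Normal Distribution is written:

f(x) = 1/(σ√(2π)) × e^(-(x - μ)²/2σ²)

and approximately 68% of the population lies within one standard deviation of the mean.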


De Moivre also derived a law relating trigonometry to complex numbers which was indeed named after him. It states that for any real number x and any integer n:

(cos x + i sin x)^n = cos(nx) + i sin(nx)
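The formula is easy to verify numerically, for example with Python's built-in complex number support (an illustrative check using arbitrary values of x and n):

# Numerical check of de Moivre's formula for one arbitrary x and n.
import math

x, n = 0.7, 5
lhs = (math.cos(x) + 1j * math.sin(x)) ** n
rhs = math.cos(n * x) + 1j * math.sin(n * x)
print(abs(lhs - rhs) < 1e-12)  # True: both sides agree to within rounding error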

He supplemented his meagre income as a mathematics tutor with a little gambling and with the publication of his book The Doctrine of Chances: a method of calculating the probabilities of events in play, one of the first books about probability theory, which ran into four editions between 1711 and 1756.


1738 Swiss mathematician Daniel Bernoulli showed that Newton's Laws apply to fluids as well as solids and that as the velocity of a fluid increases, the pressure decreases, a statement known as the Bernoulli principle.

More generally the Bernoulli Equation is a statement of the conservation of energy in a form useful for solving problems involving fluid mechanics or fluid flow. For a non-viscous, incompressible fluid in steady flow, the sum of pressure, potential and kinetic energies per unit volume is constant at any point.
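In modern symbols, for a fluid of density ρ moving with velocity v at pressure p and height h above a datum:

p + ½ρv² + ρgh = constant

where g is the acceleration due to gravity.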

Bernoulli's equation also underpins the classical explanation of flight: air passing over the top of the wing must travel further, and hence faster, than air travelling the shorter distance under the wing. This results in a lower pressure above the wing than below it, and this pressure difference creates the lift.
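A minimal sketch of the pressure difference implied by this argument, assuming purely illustrative air speeds over and under the wing:

# Pressure difference from the Bernoulli Equation at constant height,
# where the potential energy terms cancel. All figures are assumptions.
rho = 1.2                      # approximate density of air at sea level, kg/m^3
v_over, v_under = 70.0, 60.0   # assumed air speeds over and under the wing, m/s

delta_p = 0.5 * rho * (v_over**2 - v_under**2)   # difference in dynamic pressure
print(delta_p)  # 780 Pa of lift-generating pressure difference per square metre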


See also Diagrams of Aerodynamic Lift and Alternative Theories of Flight


Daniel Bernoulli was also the first to explain that the pressure exerted by a gas on the walls of its container is the sum of the many collisions by individual molecules, all moving independently of each other - the basis of the gas laws and the modern kinetic theory of gases.


Daniel Bernoulli was a member of the Bernoulli family, many of whom gained international distinction in mathematics. They were Calvinists of Dutch origin who were driven from Holland by religious persecution, finally settling at Basel in Switzerland.


James (Jacques/Jakob) Bernoulli was the first to come to prominence. He learned about calculus from Leibniz and was one of the first users and promoters of the technique. In his Ars Conjectandi, "The Conjectural Arts", published in 1713, eight years after his death, by his nephew Nicholas Bernoulli, he established the principles of the calculus of probabilities, the foundation of probability theory, as well as the principles of permutations and combinations. He was also one of the first to use polar coordinates.


John (Jean/Johann) Bernoulli, James' brother and father of Daniel, was clever but unscrupulous, fraudulently substituting the work of his brother James, of whom he was jealous, for his own to cover up his errors. He also banished his son Daniel from his home when Daniel was awarded a prize he himself had expected to win. Nevertheless he was a great teacher and advanced the theory of calculus to explore the properties of exponential and other functions.


John's three sons Nicholas, Daniel and John Bernoulli the younger and his two sons John and James all achieved distinction in mathematics in their own right.


1740 British clockmaker Benjamin Huntsman, in search of spring steel for his clock making business, developed the crucible steel process to improve the quality of conventional blister steel, which was not uniform and often contained slag and structural dislocations making it unsuitable for high stress applications. Blister steel, the best quality steel available at the time, was derived from wrought iron using the cementation process and had never been in a fully liquid state.

Huntsman's solution was to refine the blister steel by melting it and skimming off the slag to produce homogeneous molten steel which could be poured into moulds to produce high strength, pure cast steel ingots. He chose Sheffield as the location for his business since it had a plentiful supply of good quality coke, the fuel needed to achieve the very high temperature necessary to melt the steel. Such high temperatures and fine control had never before been achieved in furnaces of a practical size.

His process involved heating a 34 pound (15 kg) charge of small pieces of blister steel, together with a limestone flux, to over 1600°C in small covered refractory vessels (fireclay pots) called crucibles, for three hours in a coke fire to melt the steel. The crucibles had to be robust enough to withstand the very high furnace temperatures, and the ceramic material from which they were constructed had to avoid contaminating the melted steel.

This process eliminated the defects from the steel and, after casting, produced a homogeneous, high tensile strength, high quality steel. The crucible operation required very precise control of the furnace but the small scale of the operation also allowed more precise control of the process than was possible with a large blast furnace. It also allowed other alloying materials to be added to the mix to make specialist steels to precise specifications but the method was slow and labour intensive and only suitable for making small batches. Fuel costs were also very high. After 1870, the coke fired furnaces were replaced by gas fired furnaces.

Huntsman's crucible steel set new standards for the quality of steel. Key to his success were the design and manufacture of the crucibles, the high temperature furnaces and the control of the content of the steel charge, all of which he kept a closely guarded secret.


See also Iron and Steel Making.


1744 Prolific French inventor Jacques de Vaucanson, maker of robot devices and automatons playing musical instruments and imitating the movements of birds and animals, turned his attention to the problems of mechanisation of silk weaving. Building on the inventions of Bouchon and Falcon, he built a fully automated loom which used perforated cards to control the weaving of patterns in the cloth. Vaucanson also invented many machine tools and collected others which became the foundation of the 1794 Conservatoire des Arts et Métiers (Conservatory of Arts and Trades) collection in Paris. Although Vaucanson's loom was ignored during his lifetime, it was rediscovered more than half a century later at the Conservatoire by Jacquard who used it as the basis for his own improved design.


1745 Electricity first stored in a bottle (literally). The discovery of the Leyden Jar, essentially a large capacitor, was claimed by various experimenters but is generally attributed to Dutch physicist and mathematician Pieter van Musschenbroek and his student Andreas Cunaeus (whom he almost electrocuted with it), working at Leyden University in Holland. The first source of stored electrical energy, the Leyden jar was simply a jar filled with water, with metal foil around the outside and a nail piercing the stopper and dipping into the water.

A similar device was also invented at the same time by Ewald Jurgens von Kleist, Dean of the Cathedral of Kammin in Germany.

The design was improved in 1747 by English astronomer John Bevis who replaced the water with an inner metal coating covering the bottom and sides nearly to the neck. A brass rod terminating in an external knob passed through a wooden stopper or cork and was connected to the inner coating by a loose chain or wire.


The invention of the Leyden jar was a key development in the eighteenth century and until the advent of the battery, Leyden jars, together with von Guericke's and Hauksbee's electrostatic generators, were the experimenters' only source of electrical energy. They were however not only made for scientific research, but also as curiosities for amusement. In the 18th century, everybody who had heard of it wanted to experience an electric shock. Experiments like the "electric kiss" were a salon pastime.


1746 French clergyman and physicist Jean Antoine Nollet demonstrated that electricity could be transmitted instantaneously over great distances suggesting that communications could be sent by electricity much faster than a human messenger could carry them.

With the connivance of the Abbot of the Grand Convent of the Carthusians in Paris he assembled 200 monks in a long snaking line, with each monk holding the ends of eight-metre-long wires to form a chain about one mile long. Without warning he connected a Leyden Jar to the ends of the line, giving the unsuspecting monks a powerful electric shock, and noted with satisfaction that all the monks started swearing and contorting, reacting simultaneously to the shock. A second demonstration was performed at Versailles for King Louis XV, this time by sending current through a chain of 180 Royal Guards, since by now the monks were less than cooperative. The King was both impressed and amused as the soldiers all jumped simultaneously when the circuit was completed.


1746 English mathematician and scientist, Benjamin Robins, constructed a whirling arm apparatus to conduct experiments in aerodynamics. He attached a horizontal arm to a vertical pole, which he rotated, causing the arm to spin in a circle. A variety of objects were attached to the end of the rotating arm and spun at high speed through the air. His tests confirmed that the size, the shape and the orientation of the objects had a tremendous effect on air resistance and the drag they experienced. This idea was subsequently picked up and used by others such as Smeaton who used it to derive the aerodynamic lift equation.


1747 - 1753 Fabulously wealthy, eccentric English loner Henry Cavendish discovered the concept of electric potential, that the Inverse Square Law applied to the force between electric charges, that the capacity of a condenser depends on the substance between the plates (the dielectric) and that the potential across a conductor is proportional to the current through it (Ohm's Law).

Charge was provided by Leyden Jars. Potential was "measured" by observing the deflection of the two gold leaves of an electrometer but since no instruments for the measurement of electric current existed at the time, Cavendish simply shocked himself, and estimated the current on the basis of the extent and magnitude of the resulting pain.

Cavendish also analysed the puzzle of the Torpedo fish which seemed to give an electric shock which was not accompanied by a spark. At that time the presence of a spark was considered to be an essential property of electricity. He was the first to make the distinction between the amount of electricity (its charge), now measured in Coulombs, and its intensity (its potential difference), now measured in Volts. He showed that the fish produced the same kind of electricity as produced by an electrostatic generator or stored in a Leyden jar, but the electricity from the fish was high charge with low voltage whereas the electricity from a typical Leyden jar was high voltage with a low charge. This was because the fish's electric charge was generated by a multitude of gelatinous plates, each providing a small charge, connected together in series and parallel combinations, as in the cells of a battery, to increase the potential difference and charge capacity respectively. We now know that the fish can generate a voltage of about 250 Volts while the voltage on the Leyden jar could typically be ten times that.
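The series/parallel arithmetic is the same as that used for battery packs today. A minimal sketch, with purely illustrative per-plate figures (the real values for the fish's plates vary):

# Illustrative series/parallel combination, as in the Torpedo fish's stacks
# of gelatinous plates or the cells of a battery. The per-plate voltage and
# charge figures below are assumptions for illustration only.
plate_voltage = 0.15        # volts per plate (assumed)
plate_charge = 1.0          # arbitrary charge units per plate (assumed)

plates_in_series = 1700     # a series stack multiplies the voltage (assumed count)
stacks_in_parallel = 10     # parallel stacks multiply the available charge (assumed)

total_voltage = plate_voltage * plates_in_series   # 255 V, the order quoted above
total_charge = plate_charge * stacks_in_parallel   # 10 charge units
print(total_voltage, total_charge)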


Cavendish recorded all his experiments in notebooks and manuscripts but published very little, principally the results of the chemical experiments which formed the bulk of his work. It was therefore left to Coulomb (1785), Ohm (1827) and Faraday (1837) to rediscover these laws many years afterwards. His papers were discovered over a century later by James Clerk Maxwell who annotated and published them in 1879.


Cavendish's family endowed Cambridge University's Cavendish Laboratory at which many of the world's discoveries in the field of nuclear physics were made.


1747 British physicist Sir William Watson ran a wire on insulators across Westminster Bridge over the Thames to a point over 12,000 feet away, using an earth or ground return through the river. He was able to send a charge sufficiently intense, after passing through three people, to ignite spirits of wine. Watson was probably the first man to use ground conduction of electricity, though he may not have been aware of its significance at the time. He was also the first to recognise that a discharge of static electricity is equivalent to an electric current.


1748 Watson used an electrostatic machine and a vacuum pump to make a glow discharge lamp. His glass vessel was three feet long and three inches in diameter. The first fluorescent light bulb.


1748 To carry out measurements with less risk of electrocution of the experimenter or dragooned assistants, Nollet invented one of the first electrometers, the electroscope, which detected the presence of electric charge by using electrostatic attraction and repulsion between two pieces of metallic foil, usually gold leaf, mounted on a conducting rod insulated from its surroundings. The first voltmeters.


1748 Swiss mathematician and physicist Leonhard Euler produced this remarkable formula:

e^(ix) = cos(x) + i sin(x)

where i = √-1

and e = 2.71828... the base of the natural logarithm, now known as Euler's number.

In the special case where x = π,     then cos(π) = -1 and sin(π) = 0

and Euler's formula reduces to:

e^(iπ) = -1

Euler had thus discovered a simple and surprising relationship between three mathematical constants.
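Both results can be checked numerically in Python (an illustrative check only):

# Numerical check of Euler's formula and of the special case x = pi.
import cmath, math

x = 1.3  # arbitrary real number
print(abs(cmath.exp(1j * x) - (math.cos(x) + 1j * math.sin(x))) < 1e-12)  # True
print(cmath.exp(1j * math.pi))  # approximately -1, plus a tiny rounding error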


Among his many other accomplishments, Euler developed equations for calculating the power and torque developed by hydraulic turbines.
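In its modern form (stated here in present-day notation rather than Euler's own), the Euler turbine equation gives the torque T on the runner as:

T = ρQ(r1·vw1 - r2·vw2)

where ρ is the water density, Q the volume flow rate, r the radius and vw the tangential (whirl) component of the water velocity, measured at the runner inlet (1) and outlet (2). The power delivered is then P = Tω, where ω is the angular velocity of the runner.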


The following are some key developments in hydraulic power technology.


  • Hydropower has been used since ancient times for turning mill wheels in flour mills grinding grain. Its earliest form was the familiar wooden water wheel, often called the Vitruvius wheel after the Roman military engineer Vitruvius who first described it in detail in around 15 B.C. This was a vertical wheel rotating on a horizontal axis perpendicular to the water flow so that the water impinged tangentially on flat blades attached to the wheel's periphery, causing it to turn.

  • The simplest design was the undershot wheel in which the lower part of the wheel dipped into a moving stream and the water impinging on the flat blades or paddles caused the wheel to turn. To turn the horizontal mill stones, the waterwheel had to be coupled to the vertical shaft of the stones via a wooden right-angle gear drive. Undershot wheels are suitable for use in shallow streams but their efficiency is very low, between 5% in the worst case and 22%, as later calculated by John Smeaton.

    The efficiency was improved in the overshot wheel in which water was fed from above via a chute, or penstock which could control the flow, on to the wheel near the top of its cycle, just past its highest point. Instead of flat blades, the overshot wheel had a series of fixed buckets mounted around its circumference. In action, the weight of the water-filled buckets on the down side of the wheel, compared with the weight of the empty buckets on the up side, created an unbalanced torque on the wheel causing it to turn. The orientation of the fixed buckets gradually changed as the water wheel rotated through its cycle, and the water was discharged as the buckets approached their lowest point and entered their up cycle, when the buckets were upside down. The overshot wheel has the double advantage of gravity providing the turning force as well as, to a lesser extent, the momentum of the water. Efficiencies could be as high as 63%.


    Since Roman times a huge variety of water wheels and turbines have been developed to work in a wide range of operating conditions such as high speed low volume, and low speed high volume water flows and intermittent, variable and bi-directional flows as well as systems fully or partially immersed in the water. Practical systems however must be supported by a variety of ancillary control equipment to accommodate fluctuating water supplies and to match them to irregular mechanical or electrical loads and custom power take-off arrangements.


    Water wheels and turbines derive their torque from the change in momentum (mv) of the water flow, by changing either the speed, direction, pressure or weight of the flow. (A rough estimate of the power available to such machines is sketched after this list.)

    Impulse turbines obtain their torque by changing the direction of the water flow. They normally operate in air or only partially submerged.

    Reaction turbines develop torque from accelerating water flows between the turbine blades causing pressure differentials. They normally operate fully submerged or encased to contain the water pressure.

    See more about water turbines on the Hydroelectric Power page.


  • 1759 English engineer John Smeaton developed a method of calculating hydraulic efficiencies based on models. He designed several Vitruvian style water wheel installations and was the first to use cast iron wheels and gearing. This was around the start of the industrial revolution and water wheels were beginning to be used for powering machinery and percussion tools but ten years later Watt's steam engine also became available to fulfil that role. Subsequently, most development of hydropower took place in countries with ample, constant and reliable hydro sources such as France and the USA, whereas the development of steam power was pursued more in countries lacking those resources such as the UK.

  • 1767 French inventor the Chevalier de Borda analysed the undershot water wheel and proposed that a curved blade design would enable the water to pass through the wheel with minimum turbulence and would therefore reduce losses and hence improve efficiency.

  • 1824 In an attempt to capture the maximum energy from the water wheel, French mathematics teacher at the Ecole des Mines, Claude Burdin, expanded on de Borda's idea in his publication "Hydraulic Turbines", proposing that the maximum efficiency could be achieved with a water flow parallel to, rather than perpendicular to, the axis of the wheel, a configuration known as axial flow. He pointed out however that using heavily curved blades in an attempt to achieve maximum efficiency would direct the exhaust water flow against the back of the following blade, slowing it down, while alternatively directing the exhaust downwards allowed the water to leave with comparatively high velocity, resulting in less energy being extracted from the water flow. While the factors affecting efficiency were thus better understood, designing a practical turbine was still a problem.

  • Burdin coined the word "turbine" which he took from the Latin "turbo" meaning a vortex or spinning. The array of blades mounted on the rotating shaft of the turbine is called the "runner".


  • 1827 At the age of 25, French engineer Benoît Fourneyron, a pupil of Burdin, solved many of these efficiency problems with his design for a turbine capable of producing around 6 horsepower (4.5 kW). It was a horizontal (vertical shaft) radial flow device with the water flowing outwards from the centre through two sets of blades or vanes curved in opposite directions: a fixed set, which he called the distributor (also known as wicket gates), directed the water flow at the optimum angle on to the rotating runner blades. Since the Fourneyron turbine reacts to the pressure on the runner it is classified as a reaction turbine.
  • It was the world's first commercial hydraulic turbine and proved highly successful. Within a few years, hundreds of factories used Fourneyron-style turbines. By 1837, he had produced a 60 hp (45 kW) turbine operating at 2,300 r.p.m. with an efficiency of 80% weighing only 40 pounds. In 1895 Fourneyron-type turbines, designed by Faesch and Piccard of Geneva, were installed in the world's first hydroelectric AC generating station at Niagara Falls coupled to Westinghouse electric generators. See also the Current Wars.


  • 1844 American civil and mechanical engineer Uriah Atherton Boyden made efficiency improvements to early Fourneyron turbines by optimising the passages of the input and exhaust water flows achieving 78% efficiency.

  • 1846 Belfast born James Thomson, elder brother of Lord Kelvin, designed the Vortex inward radial flow reaction turbine which he patented in 1850. Similar to the Francis turbine (see next), water entered around the circumference of a vertical shaft runner and was directed through coupled, moveable (pivoted), curved guide vanes on to curved runner blades to enable optimum performance with different flow rates. It was compact and could work with water heads as low as 3 feet (1 m). His first model turbine, produced in 1847, delivered 0.1 hp (75 W) with an efficiency of 70%. Later models achieved 75% efficiency.
  • A Vortex turbine was used in 1878 by William Armstrong to power the world's first hydroelectric power installation at Cragside in the UK.


  • 1849, British born, American James B Francis, chief engineer of the Locks and Canals Company at Lowell, Massachusetts, and friend of Uriah Boyden, developed the first modern water turbine – the Francis turbine. He made major improvements to Fourneyron's design, achieving efficiencies of 90%. Like Thomson's Vortex turbine it was an inward radial flow design, rather than Boyden's outward flow design, but it also included an element of axial flow so that water entered radially and exited axially (now called a mixed flow design). For this it used deeper blades, curved around two axes at right angles to each other. Water was distributed around the circumference of the runner in a spiral casing with reducing diameter to ensure uniform velocity of entry to the blades. Curved stationary guide vanes and shaped rotor vanes ensured that water entered the runner shock and turbulence-free at the correct angle. The runner blades, like those of many reaction turbines, were shaped like aerofoils so that the water flow created a greater pressure on one side of the blades than on the other, creating a reaction force which caused the runner to rotate. The blades also had a bucket-like curve towards the turbine outlet so that the water impinging on this surface provided an added kick or impulse to the blades before leaving the runner.
  • The Francis turbine operates under a wide range of conditions and remains the most widely used large water turbine in the world today with about 60% of all high power installations.


  • 1851 French engineer Louis Dominique Girard introduced the Girard axial flow impulse turbine. It comprised an array of small curved plates arranged in an annular ring around the periphery of a large diameter flat turbine wheel or runner. Water was directed at right angles to the wheel through these moving vanes via a series of fixed, curved vanes in two diametrically opposite quadrants. Very high speeds were possible.

  • 1870s, American inventor Lester Allan Pelton developed the Pelton wheel, an impulse water turbine, which he patented in 1880. Tangential jets of water impinge on pairs of buckets mounted side by side around the circumference of a small wheel. The buckets split the water jet into two equal streams which emerge from opposite sides of the wheel, balancing the side-load forces on the wheel. The curved profile of the buckets ensures smooth water flow maximising the energy capture from the stream. The Pelton turbine is a simple and efficient design which needs only a small water flow and can operate with very high water heads at very high speeds.

  • 1913 Austrian civil engineer Viktor Kaplan developed the Kaplan turbine, a propeller-type turbine with adjustable runner blades as well as adjustable wicket gates directing the water flow for which he received four patents. The machine's variable geometry enabled fine control over the water flow and high efficiencies to be achieved over a wide range of water flows and pressure heads.
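As noted above, a rough estimate of the power available to any of these machines can be made from the standard hydropower relation P = η × ρ × g × Q × H, sketched here in Python with purely illustrative flow, head and efficiency figures:

# Rough hydropower estimate: P = efficiency x water density x g x flow x head.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # acceleration due to gravity, m/s^2

def hydro_power_watts(flow_m3_per_s, head_m, efficiency):
    # Power extracted from water falling through a given head at a given flow rate
    return efficiency * rho * g * flow_m3_per_s * head_m

# Example: 2 m^3/s through a 10 m head at 90% (Francis-class) efficiency:
print(hydro_power_watts(2.0, 10.0, 0.9))  # 176580 W, i.e. about 177 kW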

See also Steam Turbines.


1750 to 1850 The Industrial Revolution

In the period from around 1750 to around 1850 a series of technical innovations took place in Britain, each one with the simple aim of solving a particular problem or of doing things more efficiently, each one creating yet more opportunities for innovation. The way forward was shown by the development of rudimentary machines to improve productivity by mechanising manual work. The advent of the steam engine raised the potential of this mechanisation to a much greater level. The following were some key developments:

  • (1701) Jethro Tull's seed drill, an early example of mechanisation, revolutionised British agriculture.
  • (1709) Abraham Darby's mass production of cast and wrought iron provided the essential materials for building industrial tools and machines.
  • (1712) Thomas Newcomen invented the first practical steam engine which was first used for pumping water out of mines, but with further developments became the workhorse of the industrial revolution.
  • (1733) John Kay's hand operated flying shuttle brought mechanisation to the weaving industry.
  • (1737) John Harrison's marine chronometer, the first method to successfully determine longitude, completed its sea trials.
  • (1759) Josiah Wedgwood founded his pottery factory. He used mass production techniques coupled with scientific method to determine precise controls on the composition of the glazes, the temperatures of the kilns and the glazing process to produce high quality ceramics. (A typical example of the possibilities of mechanised production of ceramic products. Not unique to Wedgwood). Wedgwood was instrumental in commissioning and funding Brindley's Trent and Mersey Canal which secured supplies for his potteries. He was also a pioneer in marketing and advertising, one of the first to open showrooms to display his products and to make skilful use of royal patronage to promote and sell them.
  • (1761) James Brindley extended the British canal system creating a national network facilitating the easier and more economical movement of goods.
  • (1764) James Hargreaves' spinning jenny, powered by hand, brought further mechanisation to the textile industry.
  • (1765) Matthew Boulton introduced the factory system to the metalworking industry and provided social security for his employees.
  • (1769) James Watt greatly improved the efficiency of steam engines improving the economic viability of steam power.
  • (1771) Richard Arkwright developed much larger machine driven spinning frames which he installed at Cromford Mill where he pioneered the factory system of production in the spinning industry.
  • (1777-1779) Thomas Pritchard designed, and Abraham Darby III built, the World's first iron bridge at Coalbrookdale in Shropshire.
  • (1779) Samuel Crompton invented the spinning mule which could produce a wide range of high quality fine yarns.
  • (1783) Henry Cort improved the processes of iron making and forging by means of puddling and rolling mills, reducing the cost of wrought iron and increasing its potential applications.
  • (1786) Matthew Boulton applied steam power to coining machines to manufacture coins for the mint. (A typical example of the possibilities of mechanised production of metal parts. Not unique to Boulton)
  • (1792) William Murdoch invented, but sadly did not patent, domestic gas lighting.
  • (1794) Eli Whitney in the USA invented the cotton gin which revolutionised the processing of raw cotton.
  • (1797) Henry Maudslay and James Nasmyth developed precision machine tools while Eli Whitney pioneered manufacturing using interchangeable parts.
  • (1804) Richard Trevithick developed a high pressure steam engine and used it to power a steam powered road vehicle.
  • (1825) George Stephenson opened the world's first public railway initiating a rapid improvement in the country's transport infrastructure.
  • (1827) Benoît Fourneyron developed the first practical water turbine enabling exploitation of low cost water resources, where available, for industrial mechanisation.
  • (1837) William Cooke and Charles Wheatstone patented the first two way electric telegraph communications.
  • (1853) George Cayley published the theory of flight and launched the first manned glider.
  • (1855) Henry Bessemer introduced mass production to steelmaking, lowering steel's cost and increasing its strength, dramatically increasing its use.

Taken together these innovations had a profound and unprecedented effect on society and on social, economic and cultural conditions.


Though not fully exploited at the time, several important discoveries were also made towards the end of the period which laid the groundwork for a second wave of innovation based on electrical communications, electric power, computers and household appliances.


What were the results of all of this innovation?

Production methods were mechanised, reducing costs, and the steam engine enabled factories to use very large machines to achieve even greater levels of mechanisation, reducing costs even further. The new transport infrastructure created by the canals, and later by the railways, made it cheaper and easier to access lower cost supplies of raw materials as well as giving access to new markets for the products produced by the factories. Manufacturing activities which had previously not been economically viable suddenly became possible. New employment opportunities were created with jobs that previously didn't exist, such as engineers, draughtsmen, machine builders, tool makers, managers, book-keepers and salesmen, and with these jobs came the possibility of social mobility. Overall, incomes rose and were more regular and secure. The cost of manufactured goods was reduced, creating more demand as well as employment opportunities. More manufactured goods were available and there was a sustained increase in the economic well-being of the country.


But there were consequences of these developments. Cottage industries could not compete with mechanised factories and went out of business. The demand for craftsmen, proud of their skills and workmanship, was replaced by the demand for unskilled factory workers to operate machines and to assemble the products. The result was a movement of the rural population towards the towns, whose infrastructure was not ready for it. At the same time, in a minor way, the employment of people involved in administering or trading with the growing British Empire, as well as the increasing life expectancy brought about later by developments in medical science, also contributed to the population growth. The population of London alone grew from around one million in 1800 to around two million in 1840, making it the largest city in the world at the time, and it continued to grow to over six million by 1900.

Unfortunately the city's infrastructure did not keep pace with this growth. Living conditions were consequently overcrowded, unhealthy and far from ideal until much needed improvements were developed.


Public Health Challenges and Medical Advances During the Industrial Revolution


Background - A Death Sentence ?

In the early 19th century the conditions in hospitals were gruesome. Medical science was in its infancy and there was little understanding of the causes of illness and disease, and scant knowledge, if any at all, about potential treatments for ailments and traumas. In the absence of any valid theory of bacterial infection, facilities for washing the surgeon's hands or a patient's wounds and for ventilation of the wards were not considered necessary. Surgeons, nurses and other staff paid little attention to hygiene; hands and instruments were rarely cleaned between operations and were often contaminated with blood and pus, as were their clothes, whose stains were often regarded as badges of honour displaying their experience. The consequences were that hospitals were filthy places, reeking of urine, vomit and other bodily fluids, where surgery was practised under dangerously unsanitary conditions.

Patients unfortunate enough to be treated in hospital were exposed to high levels of infection during and after their operations and typically had only about a 50 percent chance of emerging alive from the hospital. The mortality rate after thigh amputations ranged between 45 and 65 percent. But it wasn't just patients arriving with open wounds, compound fractures and similar traumas who were particularly vulnerable to the ingress of germs through their wounds. All patients undergoing surgery for any reason were subject to the same hazards of infection through the surgical incisions made in their flesh, which also led to significant loss of life.


Antiseptics

Fortunately, along with the other new technologies being developed during the Industrial Revolution, medical knowledge and practice were also improving, giving rise to the invention of a series of antiseptics and anaesthetics which dramatically improved the outcomes of medical interventions. Equally important was the recognition of the importance of cleanliness and the steps adopted to ensure its implementation. By 1865 the implementation of these measures radically reduced the number of post operative deaths and no longer was a stay in hospital considered to be a death sentence.

During this same period, similar advances in public health were also achieved through improvements in the quality of the water supplies and in public sanitation. See The Great Stink.


Between 1830 and 1834, German polymath and industrialist Carl Ludwig von Reichenbach, member of the Prussian Academy of Sciences and head of several chemical and iron works and factories, carried out large scale experimental research projects. Amongst other things he carried out fractional distillation or pyrolysis (destructive distillation) of organic substances such as coal and wood tar and other organic mixtures to separate them into their component parts, discovering numerous valuable hydrocarbon compounds in the process. These included creosote (a preservative and disinfectant), paraffin (a fuel and lubricant), phenol, also called carbolic acid (an antiseptic), pittacal (a lubricant), cidreret (used in synthetic dyestuffs), picamar (a base for perfumes) and many others.

  • In 1832 he found that the destructive distillation (pyrolysis) of wood tar produced three products: 'illuminating gas' (hydrogen and methane), charcoal and a dense liquid distillate containing turpentine and a dark, acidic, viscous oil with a smell of preserved smoked beef. Investigating further, he soaked a meat sample in a dilute solution of the distilled oil for half an hour, dried it in the sun, and examined it eight days later. He found that the meat had developed a smoky flavour and did not undergo putrefaction. He called the viscous oil Kreosote (creosote), from the Greek words for "flesh" and "preserver"; some also called it wood vinegar. He reasoned that the acquired smoky flavour indicated that creosote was the antiseptic component contained in smoke. He later discovered that this viscous oil also contained various other organic compounds. Intrigued by its apparent preservative and potential disinfectant capacity, Reichenbach engaged the services of a country surgeon and an elderly pharmacist to test the efficacy of creosote in treating various medical conditions.
  • In 1833 they provided him with 25 clinical reports outlining its curative properties in treating burns, wounds, ulcers, gangrene, scabies and other conditions. He later found a more abundant source of creosote in coal tar.

Footnotes

  • Fractional distillation is the separation of the individual chemical compounds from complex organic products by heating the mixture in stages to the temperatures at which the individual components vaporise, i.e. their boiling points.
  • Pyrolysis, or destructive distillation, is the irreversible chemical change caused by the action of heat in the absence of oxygen. Biomass pyrolysis is usually conducted at or above 500 °C, providing enough heat to deconstruct the strong bio-polymers.
  • Coal tar contains over 300 different chemical compounds, many of which can be separated by fractional distillation or pyrolysis.
  • Creosote is a category of carbonaceous chemicals formed by the distillation of various tars and the pyrolysis of plant-derived material, such as wood, or fossil fuel. It was mainly used as a preservative on wood for railway sleepers and ships, since it had been found to protect the wood from rotting. At the time it also found use as an antiseptic.

In 1834, Marcellin Berthelot, a French organic chemist, described 12 clinical cases treated topically with dilute solutions of creosote. These cases included cuts, ulcers, skin eruptions, burns, ear infections, etc. In ten cases the pain was reduced, in seven the pus dried up, and in four the lesions healed without the discharge of pus.

In 1834 carbolic acid (now known as phenol) was first extracted (in impure form) directly from coal tar by German chemist Friedlieb Ferdinand Runge and independently by French chemist Auguste Laurent. It can also be extracted from a lighter distillate of creosote produced in the second fraction of distillation. Runge called it "Karbolsäure" (coal-oil-acid, or carbolic acid) and noted that, like creosote, it preserved meat.

Reichenbach, concerned about priority of discovery, asserted that Runge had merely found his flesh preserving creosote, which he claimed was the active chemical in the distillate. Despite evidence to the contrary, at the time Reichenbach's view prevailed with most chemists, and it became commonly accepted wisdom that creosote, carbolic acid and phenylhydrate acid (another distillate of coal tar) were identical substances with different degrees of purity. Nevertheless a number of scientists recognised the efficacy of carbolic acid in preventing decay and neutralising the stench of dead animals and bodies.

Coal tar remained the primary source of carbolic acid until the development of the petrochemical industry. In 1841 Laurent isolated pure carbolic acid (phenol) in crystalline form as a derivative of benzene. He noted that it was different from creosote, in which it was the active ingredient, and that creosote is in fact a mixture of phenol and several phenol derivatives as well as other distillates which he had also identified.

Laurent was already famous for devising the method of classifying organic compounds based on the number of carbon atoms they contain and the three dimensional crystal structure of their molecules, which he followed up with an analysis of their chemical properties.

At the time, as with Runge's discovery, Laurent's attracted little immediate clinical interest among French doctors.

In 1836 Runge was supported by John Rose Cormack, a physician and medical journalist from Edinburgh, who sought to collect all the information about creosote from foreign and British journals and found that treatment with creosote reduced the discharge of pus from burns, promoted healing (scar formation) of wounds, arrested haemorrhages from capillaries, gave relief from toothache, and provided relief from pain in cancers and other conditions.

Similarly, French doctor and pharmacist Jules Lemaire was one of the first to recognise the antiseptic properties and benefits of carbolic acid (phenol). He used it to treat local skin infections. More generally he recommended its use after surgery to stop infections developing, or to deal with infections once they had developed, and later wrote extensively to describe and promote its surgical applications, which were published in 1860.

For many years carbolic acid was the primary antiseptic used in the medical profession.


Sanitation

In 1847, when a pathologist colleague of young Hungarian doctor Ignaz Semmelweis, working at Vienna General Hospital, died after suffering an accidental knife wound during an autopsy he was carrying out, Semmelweis observed the pathologist's symptoms and realised that he had died from the same infection as the dead patient whose autopsy he had been performing. Although the real cause of death was not known, since the existence of germs was not yet proven, Semmelweis argued that some form of "cadaverous particles" had been transferred, by the contaminated knife, from the patient to his colleague. He concluded that the unsanitary conditions in the hospital were responsible for the high incidence of infections and consequently ordered a rigorous cleansing regime of hand washing and the sterilising of instruments and dressings to be established in his clinic. As a result, the death rates in his previously unclean clinic quickly dropped to the levels of a second, neighbouring "normal" clinic, and thereafter the death rates in both clinics continued to fall, though more slowly. His conclusion, that the infection had been transferred by the "cadaverous particles", was however not accepted by his local contemporary surgeons, who refused to acknowledge any blame or criticism of their methods and the implication that they had been responsible for their patients' deaths. They therefore banished hand washing. Consequently Dr Semmelweis was forced out of his job and the death rates returned to their previous levels. Twenty years later Dr Semmelweis died in a mental asylum, an outcast from the local medical community.

Nevertheless the importance of personal hygiene, cleanliness and sanitation was eventually recognised by others and ultimately applied throughout the medical profession.


In 1851 Frederick Crace Calvert, professor of chemistry at the Royal Manchester Institution, investigating the properties of carbolic acid (phenol), injected cadavers with solutions of it which prevented them from deteriorating for three to four weeks.

In 1854, together with fellow Manchester based chemical engineers Alexander McDougall and Angus Smith, who were working independently on disinfectants, Calvert promoted the use of carbolic acid, derived from creosote, as an antiseptic and supplied several Manchester surgeons with samples for therapeutic trials.

In 1857, pure carbolic acid was first produced commercially in Britain by Calvert, who supplied it in powdered form as a deodorising agent to the Carlisle sewage works, which was already using creosote to reduce the odours from its cesspits. Not only did it prevent odour from the local fields which had been irrigated with sewage, but it was also claimed to have reduced the occurrence of parasites infesting cattle grazing on the land.

In 1860, McDougall, who by then was manager of the Carlisle sewage works, reported that the vapours emanating from the putrescent state of the land were obviated by the use of a solution of carbolic acid. A paper describing his findings was read to the Académie des Sciences by a French member in 1859, and in 1863 Calvert also published a report in The Lancet entitled "On the Therapeutic Properties of Carbolic Acid". As a result, carbolic acid was also adopted by other municipal sewage works across the UK to treat their effluent.

Carbolic acid was also used as a disinfectant in soaps and powders and for making dyes.


Since 1849, London doctor John Snow had been investigating the spread of cholera in London and had determined that it was a water-borne disease carried by germs. See more about Snow's innovative investigations.


In 1860 however, many people still believed that infections in wounds were due to chemical damage from exposure to noxious vapours which they called "bad air", or miasma. That year English physician and experimental pathologist Joseph Lister was appointed Regius Professor of Clinical Surgery in the University of Glasgow. In his time, a compound or open fracture usually progressed through sepsis (the chain reaction caused by the presence of pus-forming bacteria in the body) to death, unless the limb was amputated. Initially he had no conception, nor indeed did anybody else, of the vast number of types of germs that existed in nature. Despite this, Lister developed and introduced new principles of antisepsis which transformed surgical practice by the late 1800s. By showing that germs were the source of infection and could be countered with antiseptics, Lister changed the practice of medicine forever.

In 1846, as a student, Lister had been inspired to a career in surgery after attending the first public demonstration in England of the use of an anaesthetic, namely ether, by the Edinburgh surgeon Robert Liston, just weeks after it had first been successfully demonstrated in the USA by a Boston dentist, William Morton. Lister was impressed by the patient's loss of sensation and by Liston's renowned operative speed and dexterity, which had been made possible by the calming effects of the anaesthetic.

In 1853, recently qualified, Lister was appointed House Surgeon at Edinburgh's Royal Infirmary, where he was responsible for physiology and pathology and carried out research on animal tissues, blood circulation, the nervous system and the nature of inflammation, which he later considered to be an "essential preliminary" to his conception of the principle of germ theory.

In 1860 Lister moved to Glasgow to take up the Regius chair and continued his epidemiological research, looking for chemicals that might kill infectious micro-organisms.

The following year he was put in charge of the Male Accident Ward with the objective of reducing the high death rate due to post operative infections. Recognising that patients were being killed by germs, Lister theorised that if germs could be killed or prevented from entering the body, no infection would occur. He conceived ways of preventing surgical infections (sepsis) by destroying the micro-organisms that caused them by chemical means, or by preventing such germs from entering a wound in the first place, either directly or by creating a chemical barrier, which he called an antiseptic, between the surgical wound and the surroundings.

He carried out clinical trials to verify his antiseptic theory, regularly publishing his findings, but the reception of his theory was mixed and he was widely mocked for his belief in "invisible" germs. Because many surgeons didn't yet accept that germs, rather than chemicals, caused infections, they found the antiseptic system excessive and unnecessarily complicated. Some thought that Lister was claiming carbolic acid as a cure for infections, rather than as one way to prevent them.

In 1864 he became aware of Pasteur's Germ Theory of Disease, published in 1861, and specifically its findings that the processes of fermentation and putrefaction were not caused by noxious gases in the air but by small living "corpuscles" or germs, and that putrefaction could be prevented by excluding such germs from the tissues concerned. He was encouraged to discover that this was consistent with his own thoughts and with his early amateur investigations as a student with a microscope, which had revealed the teeming world of micro-organisms.


Pasteur had suggested three methods to eliminate the ubiquitous infectious micro-organisms: filtration, exposure to heat, or exposure to chemical solutions.

  • An obvious starting point was to banish dangerous filth from the operating theatres, as recommended by Semmelweis, by sterilising the surgical instruments, washing hands and clothes and eliminating all rubbish and bodily waste from the surfaces and keeping the patient's wounds clean.
  • The next step was to find practical ways to eliminate the germs. Since the first two methods suggested by Pasteur, heat and filtration, were unsuitable for the treatment of human tissue, Lister explored the chemical method. His challenge was to find a suitable chemical for killing the germs. The answer came from an unexpected source.

In 1865, after hearing that creosote had been used for neutralising the foul smell of sewage at the nearby Carlisle sewage works and that it had a therapeutic effect on local cattle, reducing their parasites, Lister obtained a sample from the sewage works for investigation. Known as "German creosote", it was a thick, smelly, tarry substance, almost insoluble in water. It was far from ideal and irritated the patient's skin, causing ulcers followed by suppuration (the discharge of pus from the wound). Looking further into the problem he confirmed that carbolic acid, which was known to kill germs on contact, was also the active ingredient in the creosote used at Carlisle. He therefore obtained pure samples from Manchester chemistry professor Calvert, which he used for a series of clinical trials. He determined that applying a solution of carbolic acid directly to the wounds, the surgeon's instruments, the surgical incisions, sutures, dressings and bandages remarkably reduced the incidence of infections, including gangrene.

Later in 1865, Lister followed up by investigating the suitability of carbolic acid as a general wound antiseptic when an eleven year old boy was admitted to his Accident Ward for treatment. The boy had sustained a compound fracture in an accident when a cart wheel from a horse-drawn vehicle had run over his left leg, causing a tibia fracture which pierced the skin of his lower leg. Normally a simple amputation would have been the only solution, but it would most likely have resulted in sepsis and death. Instead Lister determined to test the efficacy and benefits of using carbolic acid to avoid possible infection and improve the outcome. He first cleaned the wound of all blood clots and applied undiluted carbolic acid across the whole wound. After setting the bone and supporting the leg with splints, he soaked clean cotton towels in undiluted carbolic acid and applied them to the wound, covering them with a layer of tin foil and leaving them for four days. When he checked the wound he was pleasantly surprised to find no signs of infection, except for redness near the edges of the wound from mild burning by the carbolic acid. He renewed the dressing and, after a total of six weeks, was amazed to discover that the boy's bones had fused back together and no suppuration had occurred, so that the boy was able to walk home.

Lister had proved that prevention works and antiseptic surgery was born. Nevertheless his critics still considered Lister's methods to be complicated and cumbersome.

  • Over the next year Lister used carbolic acid antiseptic on nine patients, seven of whom came through surgery without infection.
  • Between 1864 and 1866, before the use of antiseptic treatment, 16 out of 35 (46%) of Lister's amputation patients in the Male Accident Ward died. In contrast, between 1867 and 1870 only 6 out of 40 (15%) died - a two thirds reduction in the mortality rate.
  • In 1867 his reputation was enhanced when he published "On the Antiseptic Principle in the Practice of Surgery" which outlined his experience and conclusions about the effectiveness of carbolic acid in preventing disease and its use to clean wounds and to sterilise medical instruments, catgut and bandages.
  • In 1871 Lister invented a new method of killing micro-organisms contaminating the operating theatre before they reached the wounds, by means of an aerosol spray of carbolic acid, which he successfully used in an operation to remove an abscess the size of an orange from Queen Victoria's armpit.

Lister was the first to apply the science of Germ Theory to prevent infection in wounds during and after surgery and, despite the critics, his Antisepsis System revolutionised surgery, became the basis of modern infection control and made surgery safe. His principles are still valid today and continue to save countless lives.


Vaccines and Immunology

In 1796 Edward Jenner discovered the use of vaccines to provide immunity against smallpox. His work on immunology by means of vaccination was one of medicine's all-time life-savers.


In 1865 Pasteur, following through on Jenner's discoveries, identified further infectious diseases that could be successfully prevented by suitable vaccines.


1858 The Great Stink - Causes and Effects

It seems hard to believe now, but like many cities in the 17th century, London used the River Thames flowing through the city as both its water supply and its sewer. This was obviously not healthy, but since there were few practical alternatives, it was at least tolerable so long as there was a high volume of fresh water flowing in the river and a comparatively very low volume of sewage polluting the water. The massive growth in London's population brought about by the Industrial Revolution however changed all that for the worse. While the water flow remained the same, by 1840 the volume of untreated human waste dumped by the sewer system into the river had increased many times over as London's population grew to over two million. As the population grew, so did the problem.

To make matters worse, both the domestic sanitary facilities and the sewer system which then existed were inadequate for the task. The lack of indoor plumbing or standpipes in the streets meant people had little option but to take their drinking and washing water from the Thames, unless they were among the very few lucky enough to live by a pristine stream or a well containing pure water. Many households had no direct connection to the sewage system, so that urine and faeces, laden with pathogens, were thrown into open drains or simply lined the streets. This effluent was channelled by rainfall into the overloaded, mostly open, sewer system if there was a nearby access point. From there it flowed into the river, so that the river itself became an enormous open sewer with an overpowering foul stench. Tons of lime were spread on the river banks and near the mouths of sewers discharging into the river to try to neutralise the toxic effluent, but with little effect.

Despite this seriously unhealthy situation, the city's main drinking and washing water supply continued to be drawn from the polluted Thames. One of the major consequences of imbibing the polluted water was the series of cholera outbreaks from which 40,000 Londoners died between 1831 and 1866. Unfortunately the Victorians had no known cure for cholera and didn't understand how it spread. Conventional wisdom at the time held that inhalation of 'foul air', or miasma, was responsible for the spread of this dreaded disease, and the Thames was the obvious source.


In 1849 London doctor John Snow published a paper arguing that infections were not spread by foul air but by water-borne germs, and that clean water was essential for preventing disease.

The same year, civil engineer Joseph Bazalgette was appointed as Assistant Surveyor to the Metropolitan Commission of Sewers. Recognising the problem of water-borne germs, he spent the next nine years creating plans for an ambitious new public sanitation system. In view of the enormous expected costs, each of his plans was rejected by the Metropolitan Board of Works, whose decisions were supported in 1854 by the Board of Health's Medical Council which (unjustifiably) rejected Snow's theory.


The Great Stink - Action at Last

In 1858, London was experiencing a heatwave, with temperatures in the sun of 118°F, and the stench from the Thames rose to an unbearable level in an episode that became known as "The Great Stink". As the water level in the river dropped, layer upon layer of rotting faecal matter, up to six feet (two metres) deep in some places, washed up on the muddy shores and fermented in the heat. At that time the Houses of Parliament, built alongside the Thames, were undergoing refurbishment, due for completion in 1860, and the politicians themselves were experiencing the repulsive smell every day. In a futile attempt to neutralise the smell they doused the curtains with chloride of lime (a deodorant and sanitising bleach) and also poured it, together with lime and carbolic acid, directly into the water. When this had no significant effect, they at last decided to approve Bazalgette's latest sanitation system plan.

His scheme involved separating the flow of sewage from the river's fresh water flow, and consisted of an extensive system of concealed underground brick-lined sewers.

The River

The river was shallow and wide in parts and marshy in its lower reaches. This made it subject to local flooding when the flow was high, and caused it to deposit a noxious sediment of solid waste in the shallows along its shores when the flow was low. Bazalgette's solution was to construct embankments (known as levees in the USA) on either side of the river to confine it into a narrow open channel between them, thus separating it from the sewage. Fortuitously these embankments also acted as flood barriers, preventing the river from spreading out over the land during storms.

The Sewers

The inadequate patchwork of existing local community sewage networks, on the other hand, would be expanded with new underground brick-lined drains to serve the full population. These local networks would also be interconnected by large main sewer pipes to funnel the sewage downstream towards a suitable outlet on the shore of the Thames estuary, away from population centres. These massive interconnecting sewer pipes were designed to run alongside the embankments, or in some cases were concealed within the brick-lined embankments, which were large enough to accommodate London's modern underground railway trains.

Implementing this plan required the replacement of 165 miles of old sewers and the construction of 1100 miles of new ones.

In addition, four pumping stations were required to pump the sewage along the 12 mile route across the undulating landscape between London and the sea. These pumps were needed to lift the sewage from low-lying areas to the intervening higher ground, from which it could fall under gravity on its way to the sea, where it would be dispatched on the outgoing tide. The four pumping engines needed for this task were supplied by the firm of James Watt and Co. and were then the most powerful engines in the world.

  • In 1858, Parliament duly approved an expenditure of £2,500,000 (somewhere between £240 million and over £1 billion in today's money) in order to undertake this extraordinary feat of engineering.
  • Starting in 1858, Bazalgette built London's first sewer network which is still in use today.
  • In 1866 the sewers almost immediately proved their worth when a new cholera outbreak hit part of the East End, the only section not yet connected to the new system, while the rest of London was spared.
  • Completed in 1875, the system not only helped to wipe out cholera in the capital but also decreased the incidence of typhus and typhoid epidemics.

Unfortunately John Snow died of a stroke in 1858 at the age of 45 before the construction started and did not live to see the successful implementation of his disease prevention policies.


Anaesthetics (Anaesthesia, the "loss of sensation")

Before the development and common use of anaesthesia, anxious patients were usually afraid of experiencing pain and often had to be forcibly restrained during the operation, hampering the surgeon's task. This tended to increase the time on the operating table and, with it, the possibility of blood loss, the chances of dying of shock and the risk of infection.

Herbal pain-killers such as opium, derived from the sap of the opium poppy, were known in the Middle East from ancient times and eventually made their way to Europe in the middle ages. Their use however was not common, since they were not well known, nor were they easily available to the rudimentary medical profession of the time. There were also concerns by some about the possibility of addiction. The advances of medical science in the 19th century led to the development of new and safer anaesthetics.


1799 Humphry Davy discovered that inhaling Nitrous oxide gas produced euphoric effects which made him laugh, a property that led to its recreational use. He called it "laughing gas" and invited his friends to laughing gas parties. He also noted that it acted as a pain-killer, and it was subsequently used to a limited extent as a general anaesthetic. Nitrous oxide is still used today as a pain-killer during childbirth and dental work. Like opium, it is also still used as a recreational drug, though this is illegal in most countries.

1831 In an attempt to produce a cheap pesticide by combining whiskey with chlorinated lime, American chemist Samuel Guthrie was the first to produce chloroform. He reported its accidental inhalation by his eight year old daughter, who became temporarily unconscious, recovering a few hours later with no significant after effects. Chloroform subsequently became an important anaesthetic.

In 1844 American chemist and physician Charles Jackson demonstrated to his students at his private laboratory in Boston that the inhalation of ether causes loss of consciousness. He suggested to local dentist William Morton that ether could be used as an anaesthetic, not just for dental extractions but for surgery too. Morton duly followed up, confirming this with experiments.

In 1846, in a public demonstration arranged by Morton to prove its reliability, ether was used to painlessly remove a tumour from a patient's neck. This was a major breakthrough in surgical practice and Morton wasted no time in publicising the discovery, so that it was quickly adopted worldwide. He was less successful however in his attempts to sell patent rights for using the procedure.

1847 James Young Simpson, Professor of Obstetrics at Edinburgh University, looking for an improvement on ether as an anaesthetic, discovered the work of Guthrie, who had reported the anaesthetic effects of chloroform in 1831.

1853 John Snow, a London doctor attending the birth of Queen Victoria's eighth child, Prince Leopold, prescribed chloroform as the anaesthetic for pain relief during the delivery. Victoria was reputed to be a daily user of laudanum, that is opium dissolved in 90% proof alcohol, to alleviate her aches and pains. Despite reservations by many in the medical profession concerned about the safety of this new alternative drug, the queen inhaled the chloroform from a handkerchief which had been soaked in the anaesthetic and was delighted with its effect. The subsequent publicity was instrumental in increasing its adoption.

In 1854 Snow went on to investigate an outbreak of cholera centred around a public water pump. He showed that the infection was spread by water-borne germs and that clean water was essential for preventing disease.


The increased adoption of anaesthesia in surgical operations from 1846 meant that patients no longer had to be awake during operations, nor did they experience pain. Surgeons no longer had to cope with patients writhing in agony, so operations were faster and there was less chance of the patient dying of shock or loss of blood. This simplified operating procedures while at the same time improving the outcomes.

In combination with the use of antiseptics, anaesthesia enabled major reductions in patient mortality.


In the early 20th century, laudanum, opiates and some other narcotics were recognised as dangerous and addictive and, since alternatives were by then available, most European and North American countries banned or restricted their manufacture and use.


Life Expectancy

Although conditions in the towns were sometimes grim, the romantic view that industrialisation was a catastrophe and that rural life before these changes took place was idyllic is unrealistic. The reality of previous rural life was also less than ideal. It had been a society of subsistence agriculture ruled by an elite, landed aristocracy. The countryside may have been a healthier environment, but in the eighteenth century, before the industrial revolution, the estimated average life expectancy at birth (LEB) in England was only 37 years, though accurate statistics are not available. That does not mean that people typically died at 37: the average was dragged down by very high infant mortality, with 18% of infants dying in their first year and 31% of newborns dying before the age of fifteen. By 1850 in England and Wales the estimated LEB had risen to 42, but over 25% of children still died before the age of five. For those who survived childhood, life expectancy rose to 57, and 10% of people born in 1850 lived to over 80.
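As a rough arithmetic check (a sketch with invented illustrative figures, not data from any source), the snippet below shows how heavy child mortality alone reconciles respectable adult life spans with a low average at birth.

```python
# A toy sanity check (illustrative numbers only) of how child mortality drags
# down life expectancy at birth (LEB): assume ~25% die in childhood at an
# average age of 2, while survivors average 57 years, as quoted for 1850.
child_mortality = 0.25
avg_age_child_death = 2.0     # assumed
survivor_expectancy = 57.0

leb = child_mortality * avg_age_child_death + (1 - child_mortality) * survivor_expectancy
print(leb)   # 43.25 years, close to the ~42 estimated for 1850
```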

This improvement was due to advances in medical science, improved sanitation and better nutrition during the intervening years. The resulting improvements in public health did not take place instantaneously. It took time for the benefits of these changes to be realised by individuals, and even longer for them to spread throughout the population at large, but they laid the foundations for much more rapid improvements in the life expectancy of subsequent generations.


People still lived in poverty and child labour was still used. Incomes were very low, irregular and uncertain; the population was generally illiterate and subject to the demands of landlords who were not necessarily any more benevolent than the factory owners who came later; and there were fewer opportunities for personal development and social mobility as means of escaping that poverty.


Unfortunately many people still write about "The Causes of the Industrial Revolution" as if it were a calamity. A more apt title would replace the word "Causes" with the word "Enablers" to recognise the positive changes in the nation's economic welfare which it brought about.


The Industrial Revolution marked the end of feudalism and the beginning of social mobility.


How did this great transformation come about?

The industrial revolution is characterised by the development of an industrial economy resulting from the ever increasing flow of innovative practical products based on the application of new technologies, mechanised production methods and the availability of mechanical power to make it happen. But for these new ideas to flourish, they had to fall on fertile ground and these conditions were found in Britain in the second half of the eighteenth century and the first half of the nineteenth century.

  • The previous two hundred years had seen the flowering of the Scientific Revolution when great thinkers, no longer hampered by censorship of new ideas by the church, provided a theoretical basis for the way things worked. Amongst others, Newton provided the Laws of Motion and Calculus, Boyle and Charles provided the Gas Laws and Hooke provided the Law of Elasticity.
  • Improved methods of time and temperature measurement were also available enabling more accurate scientific experiments to be performed.
  • The country had six universities, founded before 1600, carrying out scientific research and teaching. (Oxford, Cambridge, St Andrews, Glasgow, Aberdeen, Edinburgh)
  • Scientific societies such as the Royal Society (founded 1660), the Lunar Society of Birmingham (dating from 1765) and the Royal Institution (founded 1799), encouraged the sharing and dissemination of ideas.
  • Towards the end of the eighteenth century and during the first half of the nineteenth century, Literary and Philosophical Societies were founded in many British towns and cities, particularly in the north. Known as the "Lit and Phils", they provided the opportunity to discuss the intellectual issues of the day and to sponsor cultural activities. Amongst their aims were education and the advancement of science and technology, but in the days when there were few forms of public entertainment and recreation they coincidentally provided the opportunity for socialising and networking, and so attracted a large membership. Lectures and presentations at the "Lit and Phils" were thus well attended, and news about technology and potential investment opportunities reached a wide audience of interested and often influential people. Thoughts evolved from the familiar certainties of the past to the self-confident exploration of the potential that the future might bring. Self-help and optimism replaced sufferance of the status quo.
  • The country was being denuded of wood used for fuel, but it was self-sufficient in energy from coal, which contained more than three times the energy of wood, as well as hydro power. Similarly it had ample supplies of many key raw materials such as iron, lead, copper and tin ores, and limestone (used in iron smelting and building materials).
  • The invention of the steam engine gave the country a head start, liberating factories from inefficient manually powered and horse drawn machines, or water wheels dependent on unreliable water supplies, enabling improved efficiency and reduced manufacturing costs.
  • Good, stable economic conditions prevailed in the country.
  • Most European countries at the time were ruled by absolute monarchies. Decision making tended to be concentrated in a few hands and high up on their priority list were self preservation and control of their subjects, often accompanied by expansionist territorial aspirations backed by military power.
  • Britain too had international aspirations but by contrast, it had just agreed a "Bill of Rights" in 1689 restricting the power of the monarchy and enhancing the power of parliament. While power was not completely devolved, members of parliament ensured that regional issues got a sympathetic hearing. Priorities such as local transport infrastructure development and the promotion and protection of commerce were higher up the priority list.

  • The development of the road and canal transport infrastructure dramatically reduced the costs of transporting heavy and bulky raw materials such as coal, iron ore and clay for the potteries as well as the distribution of finished goods enabling new resources to be tapped and new markets to be reached. This was accelerated by the advent of the railways whose higher speeds enabled the distribution of fresh foods over greater distances, boosting the agricultural and fishing industries.
  • Certain regions of the country had well organised cottage industries with established industry skills, supplies and trade routes which provided a fertile environment for the introduction of new technologies. A prime example was Lancashire which, because of its damp climate, had a large cotton processing industry with a concentration of textile producers using cotton imported from established trading partners (originally from India, but progressively from the West Indies and the American colonies).
  • The rule of law prevailed with contract law and patent law providing legal protection to business and to inventors.
  • The British Empire facilitated extensive international trade networks providing access to foodstuffs and raw materials, mainly cotton, and a ready market for manufactured goods.
  • Profit flows from trade with the colonies accumulated in Britain creating a capital surplus which was available to be invested in factories, machinery, canals and railways. Similarly this influx of wealth created a new demand for manufactured goods for use in the home.
  • The British government encouraged international trade and protected it with a strong global naval presence.
  • Joint stock companies were able to provide funding enabling longer term or large projects to be undertaken.
  • The country had a tradition of free market capitalism supported by parliament and a stock exchange (The Royal Exchange opened by Queen Elizabeth I in 1571) to enable the trading of shares.
  • Insurance was available to underwrite risks. (Insurance deals were traded in Lloyd's Coffee House in London from 1688, initially, mainly for maritime risks)
  • Towards the end of the period, Building Societies were established, enabling people to purchase their own property, and Hire Purchase Contracts were introduced in support of the sales of sewing machines, enabling the setting up of small family businesses. Both of these, in their small way, helped to bring about the beginnings of social mobility and the possibility for more people to realise their full potential.

The industrial revolution started in Britain but it was quickly followed in Western Europe, then North America, followed by Japan and eventually the rest of the world (or at least most of it).


1750 Nollet demonstrated the astonishing efficiency of electrostatic spraying, an idea which was not put to practical use until it was rediscovered by Ransburg in 1941.


1750 English physicist John Michell described magnetic induction, the production of magnetic properties in unmagnetised iron or other ferromagnetic materials when brought close to a magnet. In "A Treatise on Artificial Magnets" he showed that the two poles of a magnet are of equal strength and that magnetic attraction obeys the inverse-square law.


1752 German astronomer Tobias Mayer published the method of determining longitude by means of lunar distances, together with associated lunar distance tables. The method required only a sextant, with local times derived from observations of the position of the Moon relative to fixed celestial objects. See more about lunar distances.


1752 French experimenter Thomas François Dalibard, assisted by retired illiterate old dragoon M. Coiffier, carried out an experiment proposed by Benjamin Franklin. They set up their experiment at Marly la Ville and from a safe distance (in Dalibard's case eighteen miles away) they waited for a storm. They used a long pointed iron rod, placed upright in a wine bottle and insulated from the ground by more glass bottles, to attract a lightning discharge from a thunder cloud. Coiffier subsequently drew electrical sparks from the charged rod to prove Franklin's theory that thunder clouds contain electricity and that it can be conducted down a metal rod.


1752 A man of many talents, Benjamin Franklin, one of the leaders of the American Revolution and founding fathers of the USA (journalist, publisher, author, philanthropist, abolitionist, public servant, scientist, diplomat and inventor), carried out his famous kite experiment in 1752, one month after Dalibard, and invented the lightning rod.

Franklin proposed a "fluid" theory of electricity and outlined the concepts of positive and negative charges, current flow and conductors, coining the language to describe them: words such as battery (from an array of charged glass plates, and later, a number of Leyden Jars), charge, condenser (capacitor), conductor, plus, minus, positively, negatively, armature, electric shock and electrician, all of which we still use today.


Du Fay in 1733 had first described the concept of two types of electric charges, "vitreous" and "resinous". Franklin explained current flow as the flow of positive charge towards negative charge to cancel it out. Using the water analogy, he named the point of high potential (from which the water flows) the positive terminal, the lower potential terminal being negative. Current can equally be associated with the flow of positive ions from the positive terminal to the negative terminal, or with the flow of negatively charged electrons from the negative terminal to the positive terminal. Nowadays we tend (lazily) to associate current flow exclusively with electron flow, overlooking the equally valid positive ion flow, which leads to confusion and the incorrect charge that Franklin got it wrong by defining current flow in the opposite direction to electron flow.
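The point can be made concrete with a small sketch (my own illustration, using assumed copper-like values): the sign convention means that negatively charged electrons drifting one way constitute a conventional current in the opposite direction, so Franklin's choice was a convention, not a mistake.

```python
# Current density J = n*q*v: with negative charge carriers (electrons), J
# points opposite to the drift velocity v. Values are rough copper-like
# assumptions for illustration only.
n = 8.5e28        # free electrons per cubic metre
q = -1.602e-19    # electron charge in coulombs (negative)
v = 1.0e-4        # drift velocity in m/s, taken as the +x direction

J = n * q * v     # A/m^2; the negative sign means the conventional
print(J)          # current flows in the -x direction
```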


The purpose of Franklin's kite experiment was to confirm that lightning was another manifestation of electricity. Legend has it that he flew a kite into a thunder cloud to pick up an electric discharge from the cloud. The electric charge was then conducted down the wet kite string, to which a key had been attached near the ground, and sparks emitted from the key were used to charge a Leyden jar, thus proving that an electric charge came from the clouds.

Whilst it may be heresy to suggest that Franklin did not actually carry out the kite experiment for which he is famous, there are no reliable witnesses to this event and it is a fact that nobody, including Franklin, has yet been able to duplicate the experiment in the manner he described, and few have been willing to try. One who did was Professor Georg W Richmann, a Swedish physicist working in St Petersburg, who was killed in the attempt on 6 August 1753. He was the first known victim of high voltage experiments in the history of physics. Benjamin Franklin was lucky not to win this honour.


1752 Johann Georg Sulzer noticed a tingling sensation when he placed two dissimilar metals, just touching each other, on either side of his tongue. This later became known as the battery tongue test, the saliva acting as the electrolyte carrying the current between the two metallic electrodes.


1753 A proposal was submitted in an anonymous letter to the Scots Magazine, signed "C.M." and generally attributed to Scottish surgeon Charles Morrison, for 'An Expeditious Method of Conveying Intelligence'. It described an electrostatic telegraph system using 26 insulated wires to conduct separate charges from a Leyden Jar, causing movements in small pieces of paper on which each letter of the alphabet was written.


1757 French botanist Michel Adanson proposed that the discharge from the Senegalese (electric) catfish could be compared with the discharge from a Leyden jar. The ability of certain torpedo fish or sting rays to inflict electric shocks had been known since antiquity, but Adanson's theory was new. It was later proved by the British administrator and M.P. John Walsh, secretary to Clive of India, who in 1772 managed to draw a spark from an electric eel. It is quite possible that news of Walsh's experiment influenced Galvani to begin his own experiments with frogs.


See also Cavendish's explanation of the reason why a shock could be delivered without an associated spark.


1759 German mathematician Franz Maria Ulrich Theodosius Aepinus published his book, An Attempt at a Theory of Electricity and Magnetism. The first work to apply mathematics to the theory of electricity and magnetism, it explained most of the then known phenomena.

In 1789 Aepinus also made the first variable capacitor which he used to investigate the properties of dielectrics. It had flat plates which could be moved apart and different materials could be inserted between them. Volta also laid claim to the invention of this device and to giving it the name of "capacitor".


1759 English civil engineer, John Smeaton constructed a whirling arm device for investigating the aerodynamic properties of windmills and windmill vanes. It was based on an earlier design by Benjamin Robins and had the same function as a modern wind tunnel, but instead consisted of a vertical shaft supporting a rotating arm on which models of windmill vanes could be mounted and made to pass at high speed in a circular path through the still air to determine their relative efficiency. (See diagram of Smeaton's Whirling Arm.) At the same time the blades could be rotated by means of a falling weight attached by a cable to a pulley on the windmill shaft. It was used to investigate the effects of camber and angle of attack of the blades.

Using the apparatus, Smeaton determined that the force L on a plate or blade (or aerodynamic lift in the case of wings) is given by:

L = k·V²·A·CL

where:

k is the Smeaton coefficient: the drag, in pounds weight, of a 1 square foot (0.093 m²) plate at 1 mph

V is the velocity of the air over the plate in miles per hour

A is the area of the plate in square feet

CL is the lift coefficient: the magnitude of the lift relative to the drag of a plate of the same area


This relationship is known as the lift equation and was used by the Wright brothers in the design of their wings and propellers, though from their wind tunnel experiments they determined a more accurate value for the coefficient k.
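A minimal sketch of the lift equation in use (my own example, not the Wrights' calculation): the commonly quoted historical value k = 0.005 is compared with the corrected value of about 0.0033 which the Wrights derived, for an assumed speed and lift coefficient.

```python
# Smeaton's lift equation L = k * V^2 * A * CL. The wing area is roughly that
# of the Wrights' 1901 glider; the speed and lift coefficient are assumptions.
def lift_lbs(k: float, v_mph: float, area_sqft: float, cl: float) -> float:
    """Lift in pounds weight from Smeaton's lift equation."""
    return k * v_mph ** 2 * area_sqft * cl

area = 290.0   # sq ft
v = 25.0       # mph
cl = 0.5       # assumed lift coefficient

print(lift_lbs(0.005, v, area, cl))    # ~453 lb with the old Smeaton coefficient
print(lift_lbs(0.0033, v, area, cl))   # ~299 lb with the Wrights' corrected value
```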


Smeaton also used hydraulic models and similar techniques to calculate the efficiencies of water wheels.


He is better known for the many bridges, canals, harbours and lighthouses that he built. He coined the term "civil engineer" and in 1771 founded the Society of Civil Engineers, the forerunner of the Institution of Civil Engineers.


1761 Scottish chemist and physicist Joseph Black, working at Glasgow University, discovered that ice absorbs heat without changing temperature when melting, and that similarly the temperature of boiling water does not change as heat is added to create steam. Between 1759 and 1763 he developed the theory of latent heat to describe heat flows which result in no change of temperature, that is, the heat flows which accompany phase transitions such as boiling or freezing. He also showed that different substances have different specific heats, the amount of heat per unit mass required to raise the temperature of a substance by one degree Celsius.
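A short worked illustration of Black's two ideas (a sketch using standard modern values, not Black's own measurements): latent heat is absorbed at constant temperature during melting and boiling, while specific heat governs the temperature rise in between.

```python
# Heating 1 kg of ice at 0 degC all the way to steam at 100 degC.
m = 1.0             # kg of water
L_FUSION = 334e3    # J/kg, latent heat of melting (modern value)
C_WATER = 4186.0    # J/(kg*degC), specific heat of liquid water
L_VAPOUR = 2.26e6   # J/kg, latent heat of vaporisation

melt = m * L_FUSION           # absorbed at a constant 0 degC
warm = m * C_WATER * 100.0    # raises the temperature from 0 to 100 degC
boil = m * L_VAPOUR           # absorbed at a constant 100 degC
print(melt, warm, boil)       # 334000.0, 418600.0, 2260000.0 joules
```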

James Watt was his pupil and assistant.


1761 The self-taught English engineer James Brindley, son of a farmer, opened the Bridgewater Canal which he had designed and built for Francis Egerton, the third Duke of Bridgewater, to carry coal from his coalmine at Worsley to market in Manchester, ten miles away. Transporting coal by canal boat rather than by pack horse reduced its cost by 50%. The Bridgewater Canal was the first British canal not to follow an existing water course. Instead Brindley chose a more level route by following the contours of the land to simplify construction, avoiding embankments and tunnels as well as the traditional, time-wasting locks. It did however require the construction of an aqueduct at an elevation of 39 feet (13 m) to carry it over the River Irwell, a feature which was unique at the time. The sight of a barge floating high up in the air became one of the first tourist attractions of the Industrial Revolution.


Brindley went on to build another 300 miles of canals. His Bridgewater Canal marked the beginning of Britain's golden era of canal building, from 1760 to 1830, during which the country's new inland waterways linked the otherwise isolated local canals serving the major cities into a national network, greatly improving the nation's transport infrastructure.

Before the canal system was built, the transport of bulky goods was prohibitively expensive. They were either sent by sea or overland by pack horse. This meant that users had to be located close to their source of supply or to the docks. Factories depending on steam engines had to be located near to coal mines. But canals changed all that. One canal boat, operated by one man and a horse, could carry as much as a hundred pack horses. Transport by canals cut the costs for industry and provided economic justification for new ventures which previously may not have been viable. Canals were the Motorways of the eighteenth century.

A practical example of the economic benefits of canals was the saving of the pottery industry centred on Stoke-on-Trent. The potteries were originally located there because of the availability of suitable clay and the coal to fire it, but in the 1760s, when supplies of local clay were becoming exhausted and markets demanded pottery made with finer clay from other sources, Brindley's Trent and Mersey Canal, opened in 1777, enabled the potters to bring in clay from Dorset, Devon and Cornwall by canal from the seaport, rather than move their business to other locations which may have had the clay but not the coal.


The Trent and Mersey Canal necessitated the construction of the Harecastle Tunnel, 1.64 miles (2633 m) long. It took seven years to construct and when it was completed in 1777 it was more than twice the length of any other tunnel in the world. It was however only 9 feet (2.74 m) wide and had no towpath, so boats had to be "legged" through it by men lying on their backs and "walking" on the roof, taking 2 to 3 hours to pass through. It was also too narrow to take boats travelling in both directions, so boats were grouped and a one-way system allowed the direction of travel to be changed after each group had passed through. Some enterprising local men offered their services as "leggers" to help speed the boats through.

Brindley died before the canal was completed.


To relieve congestion, a second, wider tunnel with a towpath, parallel to Brindley's tunnel, was commissioned fifty years later. It was slightly longer at 1.66 miles (2675 m) and was built by Thomas Telford. Taking just three years to complete, it was opened in 1827.


The advent of George Stephenson's faster rail transportation brought this golden era to an end.


1764 After the introduction of the flying shuttle which improved the productivity of the weaving industry, the demand for cotton yarn outstripped supply, and the cottage industry producing it, one thread at a time, on traditional spinning wheels could not keep up. In the 1760s several inventors developed machines to mechanise this process.

The first was James Hargreaves of Blackburn, Lancashire, who in 1764 invented a multi-spool spinning frame which dramatically reduced the labour content of the work. Called the spinning jenny ("jenny" being derived from "engine"), it was a machine for spinning, drawing and twisting cotton. It consisted of eight spindles driven by a single large handwheel. Cotton was drawn from eight separate rovings, long thin bundles of cotton fibre, lightly clasped between two horizontal bars, then wound onto the spindles. The spindles were mounted on a moveable carriage which allowed the roving to be stretched as it was pulled away from the clasping bars, imparting a twist to the cotton. Hargreaves sold several machines but kept his activities secret at first. However the selling price of yarn fell as production increased, while at the same time the employment of local spinners was reduced, culminating in his house being attacked and his machines smashed. As a result Hargreaves moved to Nottingham in 1768, where he eventually patented his machine in 1770.


An improved spinning machine, called a spinning frame, was invented in 1767 by John Kay, a clockmaker from Warrington, Lancashire (no relation to John Kay of Bury), who made improvements to Hargreaves' design. Instead of the simple clasp used by Hargreaves to stretch the cotton fibre roving, the roving was passed between three sets of rollers, each set rotating faster than the previous one, progressively reducing the thickness of the roving and increasing its length before a strengthening twist was added to the yarn by a separate mechanism, as illustrated below. This produced a much finer and stronger cotton yarn. The spinning frame was also called a water frame when it was powered by a water wheel.
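The effect of the roller sets multiplies up stage by stage, as this toy sketch shows (the ratios are invented for illustration, not historical figures):

```python
# Roller drafting: each set of rollers runs faster than the one before, and
# the overall draft (how much the roving is lengthened and thinned) is the
# product of the stage speed ratios.
stage_ratios = [2.0, 2.5, 2.0]   # speed of each roller set over the previous (assumed)

draft = 1.0
for ratio in stage_ratios:
    draft *= ratio

print(draft)   # 10.0: the roving emerges 10x longer and correspondingly finer
```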

At the time Kay was employed by Richard Arkwright of Preston, Lancashire, who controversially patented Kay's machine in 1769 under his own name without telling Kay. This caused a scandal and a protracted patent dispute which involved yet another inventor of a spinning machine, Thomas Highs of Leigh, Lancashire, who had worked with both Arkwright and Kay, both of whom were familiar with his work. Highs had invented several devices for processing wool and cotton but didn't have the finance to develop his ideas and, like Hargreaves, had worked in secret on his spinning machine, which he claimed to have patented in 1769. All the protagonists eventually lost out in the legal proceedings, as the jury found against Arkwright but no rights were ever transferred to Highs or Kay.


As the technology of the day advanced, the available power to turn the spindles was increased, evolving from the machine operator himself, to horses, then water wheels and finally to steam engines (now electric motors). This enabled much larger spinning frames carrying over 100 spindles to be constructed, greatly increasing the productivity.


Arkwright was more of a businessman than an inventor. In 1771 he built the world's first water-powered textile mill at Cromford in Derbyshire, where he installed production equipment driven by water power in a highly disciplined factory, with workers operating machines in 13 hour shifts with little free time, replacing the local cottage industries where whole families, including their children, developed specialist skills working together at home on traditional crafts and trades. The factory work by comparison was unskilled, with the work divided into short repetitive tasks, and the employees in both situations were mostly illiterate, since this was before the advent of universal education in Britain. Most of the employees were women and children, some as young as seven, though the minimum age was later increased to ten years old. It sounds horrific, but for his times Arkwright was an enlightened employer, building houses for his employees and providing the children with six hours of education per week so they could take on tasks such as record keeping. His Cromford Mill was the start of the factory system, which was quickly copied by others and became a hallmark of the Industrial Revolution.


1765 Matthew Boulton, who traded in ornamental metalware such as buttons, buckles and watch chains made in small workshops in and around Birmingham, opened the Soho Manufactory at Soho near Birmingham to bring all his business activities together under one roof, under his own ownership and control. Previously the goods were manufactured either in Boulton's own workshops or in the workshops of local independent artisans, of which there were many in the Birmingham area.

The Soho Manufactory was a three-storey building which housed a collection of small specialist workshops carrying out a range of metalworking processes such as stamping, cutting, bending and finishing, as well as showrooms, design offices, stores and accommodation for the employees.

Boulton was a benevolent employer. Instead of subcontracting work to other workshops in town, he employed the same skilled craftsmen who had worked in the workshops which he had displaced. Working conditions were good, employment was secure and he paid his workers well. Labour saving jigs and tools were used to improve productivity as well as the quality of the goods produced, and designs were rationalised to achieve economies of scale by using interchangeable or common components. In this way Boulton was able to take on high volume production of items such as coins for the mint as well as fine, high quality products such as jewellery, silverware and plated goods.

He refused to employ young children as in some other industries and later introduced a very early social insurance scheme, funded by workers' contributions of 1/60th of their wages, which paid benefits of up to 80% of wages to staff who were sick or injured.

At its height the factory employed a thousand people in what was the largest and most impressive factory in the world, becoming Birmingham's foremost tourist attraction.


Boulton's manufactory established the factory system in the metalworking industry, mirroring changes being made in the textile industry. Another step in the Industrial Revolution.


In 1769 Matthew Boulton also provided the financial backing and the manufacturing capability for the commercialisation of Watt's Steam engine and his Soho plant became the world's first factory to be powered by steam.


1765 A group of prominent figures in the British Midlands, including industrialists, natural philosophers and intellectuals, set up an informal learned society, later called the Lunar Society because it met during the full moon to take advantage of the lighter evenings for travelling home after meetings. Members included Matthew Boulton, James Watt, the physician and inventor Erasmus Darwin (grandfather of Charles Darwin, originator of the Theory of Evolution), Josiah Wedgwood and Joseph Priestley. Benjamin Franklin also attended a meeting of the society while visiting Birmingham and kept in touch with members.


1766 Swiss physicist, geologist and early Alpine explorer Horace Benedict de Saussure invented the first true electrometer for measuring electric potential by means of the attraction or repulsion of charged bodies. It consisted of two pith balls suspended by separate strings inside an inverted glass jar with a printed scale so that the distance or angle between the balls could be measured. It was de Saussure who discovered that the distance between the balls was not linearly related to the amount of charge, as the sketch below illustrates.
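A rough sketch of why the reading is non-linear (all values assumed; this is simply Coulomb's law applied to the suspended balls, not de Saussure's own analysis): at equilibrium the electrostatic repulsion balances the restoring effect of gravity, and since the repulsion itself falls off with separation, the separation grows only as roughly the two-thirds power of the charge.

```python
import math

# Two pith balls of mass M on strings of length LSTR: at equilibrium
# tan(theta) = F_coulomb/(M*G), with separation d = 2*LSTR*sin(theta).
K = 8.99e9             # Coulomb constant, N*m^2/C^2
M, G = 0.1e-3, 9.81    # 0.1 gram balls (assumed)
LSTR = 0.1             # 10 cm strings (assumed)

def separation(q: float) -> float:
    lo, hi = 1e-6, math.pi / 2 - 1e-6
    for _ in range(60):              # bisection on the equilibrium angle
        theta = (lo + hi) / 2
        d = 2 * LSTR * math.sin(theta)
        if math.tan(theta) < K * q * q / (d * d * M * G):
            lo = theta               # repulsion still dominates, open wider
        else:
            hi = theta
    return 2 * LSTR * math.sin(theta)

for q in (1e-9, 2e-9, 4e-9):           # doubling the charge each time...
    print(q, round(separation(q), 4))  # ...widens the gap only ~1.6x (d ~ q**(2/3))
```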


1766 Hydrogen discovered by Henry Cavendish by the action of dilute acids on metals.


1767 English clergyman, philosopher and social reformer Joseph Priestley at the age of 34 made his first foray into the world of science with the publication of a two-volume History of Electricity in which he argued that the history of science was important since it could show how human intelligence discovers and directs the forces of nature. The previous year in London he had met Benjamin Franklin who introduced him to the wonders of electricity and they became lifelong friends. Priestley's first discovery, also in 1767, was that Carbon conducts electricity.


Though he had no scientific training, Priestley is however better known as a chemist. He isolated Carbon dioxide, which he called "fixed air", and in a paper published in 1772, he showed that a pleasant drink could be made by dissolving the gas in water. Thus was born carbonated (soda) water, the basis of the modern soft drinks industry.

He was a great experimenter, discovering Nitrous oxide (laughing gas) and several other chemical compounds and, unaware of Scheele's earlier work, independently discovering Oxygen in 1774. Priestley was no theorist however, and he passed on his results to the French chemist Lavoisier, who repeated the experiments taking meticulous measurements in search of the underlying patterns and laws governing the chemical reactions.

Experimenting with growing plants in an atmosphere of Carbon dioxide, Priestley observed that the plants consumed the Carbon dioxide and produced Oxygen, identifying the process now known as photosynthesis. This was the first connection between chemistry and biology.


As a reformer, Priestley was a strong supporter of the 1776 American and the 1789 French Revolutions. This brought him into conflict with conservatives and in 1791 angry mobs burnt down his house and his church destroying many of his manuscripts. The intimidation continued until 1794 when the aristocratic Lavoisier, on the opposite side of the revolutionary fence from Priestley, was executed by French revolutionaries. A few weeks later Priestley emigrated to America to escape persecution spending the rest of his life there.


1769 The introduction of Watt's Steam Engine was a key event in the Industrial Revolution.

James Watt, a Scottish instrument maker working at the University of Glasgow, was in 1763 given the job of repairing a model of Newcomen's 1712 steam engine. He noted how inefficient it was and between 1763 and 1775 developed several improvements to the design. The most important of these was the introduction of a separate, cold chamber for condensing the steam, which avoided the need to alternately heat and cool the main cylinder: the cylinder could be kept hot while the steam was condensed in the cold condensation chamber. (See diagram of Watt's Steam Engine)

As in Newcomen's engine, steam introduced under the piston drove it to the top of its stroke, at which point the steam was shut off, but the atmospheric power stroke was different. When the piston reached the top of its stroke, a valve at the lower part of the cylinder opened, releasing the steam into the cold chamber where it condensed, reducing the pressure under the piston, which was then pushed down by atmospheric pressure acting on its top. The use of the separate condenser reduced the heat losses in every cycle, led to a dramatic improvement in the fuel efficiency and speed of the engine, and was the basis of Watt's patent in 1769.


Watt's original engine, like Newcomen's, generated most of its mechanical power, that is its atmospheric power, on the downstroke but not on the upstroke, and this intermittent power delivery was not suitable for producing smooth, continuous rotary motion. To overcome this drawback, Watt developed a second innovation: introducing steam on top of the piston at the top of its stroke as well as below the piston at the bottom of its stroke. This second steam supply pushed the piston down, with the steam being exhausted from above the piston into the cold chamber at the end of the down stroke, thus creating a double-acting engine with the steam pushing and the vacuum pulling the piston on both the up and down strokes. A double benefit of this system was that it also improved the efficiency still further. The idea was later developed by Trevithick and others for use in high pressure, horizontal engines.

(See Double Acting Piston).


Watt initially had difficulty in both manufacturing and commercialising his engine, but this problem was solved when he entered into partnership in 1769 with Matthew Boulton, a Birmingham manufacturing entrepreneur. Watt had sought help from Boulton to produce the precision components for his steam engine and discovered a willing partner, since Boulton's production had often been interrupted by the unreliable water supply to the water wheel powering his Soho factory. The Boulton and Watt company they founded was able to fund the further development of Watt's engines and to manufacture them with improved precision at Boulton's Soho plant. Their engines used only 20% to 25% of the coal needed by Newcomen engines to generate the same power, and Boulton was instrumental in securing a patent for the steam condenser, which meant that any user of the condenser technology had to pay substantial monthly royalties to the company, a requirement that was rigidly enforced. Boulton's Soho plant became the world's first factory with machines powered by a steam engine.


In 1788 Watt invented the centrifugal or flyball governor to provide speed control for his steam engines, an early example of an automatic feedback control system. See diagram of Watt's Flyball Centrifugal Governor.

See more examples of Early Control Systems.
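In modern terms the governor is a proportional controller: the flyballs move the steam valve in proportion to the speed error. A toy simulation (all parameters invented for illustration) shows the characteristic behaviour, including the small steady-state offset, or "droop", inherent in purely proportional control.

```python
# A crude first-order engine model with a proportional (flyball-style) governor.
def clamp(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))

SETPOINT = 100.0   # target engine speed in rpm (assumed)
KP = 0.01          # proportional gain of the linkage (assumed)
speed = 60.0       # engine starts below the set speed

for _ in range(200):
    error = SETPOINT - speed
    valve = clamp(0.5 + KP * error)          # flyballs reposition the valve
    speed += 0.1 * (150.0 * valve - speed)   # toy engine response to the steam

print(round(speed, 1))   # 90.0: settles near, but not exactly at, the setpoint
```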


The steam engine was quite literally the driving force behind the Industrial Revolution, freeing people from back breaking work, providing prodigious mechanical power to drive factories and machines, enabling a myriad of applications, and powering the railways, thus facilitating trade and travel. The prime movers used to drive the first electricity generating plants by Schuckert, Edison and Ferranti, starting in 1878, were also large reciprocating steam engines based on James Watt's technology. The result is that Watt is commonly credited as the father or inventor of the steam engine and with bringing about the birth and exploitation of this technology, but there were many other contributors.


The following are some of the other key technologies and inventions associated with the development of the steam engine and its applications.


1770 French military engineer Nicolas-Joseph Cugnot built his "fardier à vapeur", a three wheeled, steam driven military tractor and the world's first self propelled road vehicle, based on a smaller model he had produced the previous year. It was a mechanised version of the massive two-wheeled horse-drawn dray or wagon, known in France as a "fardier", used for transporting very heavy military artillery equipment. The boiler and driving mechanism were mounted on the single front wheel of the vehicle, replacing the horses. (See picture of Cugnot's Steam Carriage)


The engine used two vertically mounted single acting pistons acting directly over the wheel, one on each side, with the piston rods connected to a rocking bar, pivoted at the centre, which synchronised the piston movements in opposite directions. High pressure steam was applied alternately to the pistons so that the power stroke pushing one piston down caused the opposite piston to move back up, ready to start its own power stroke. Mounted on the driving axle were two disks, one on each side of the single driving wheel, each with a ratchet of notches around its circumference. Power was transferred to alternate sides of the wheel by means of pawls on the piston rods, which engaged with the ratchets on the down stroke to turn the wheel and slid over them on the up stroke while the drive was transferred to the disk on the opposite side of the wheel. This arrangement is considered one of the first successful devices for converting reciprocating motion into rotary motion. It was also the forerunner of the freewheel mechanism.


The driving wheel and engine assembly was articulated to the rest of the cart, and steering was by means of a lever (tiller steering) which turned the whole driving assembly, including the boiler. The vehicle weighed in at over 2 tons and was designed to carry a load of 4 tons at a speed of 2.5 miles per hour. The massive boiler overhung the front of the wheel, making the vehicle somewhat unstable, and since there was no provision for carrying water or fuel, the vehicle had to stop every ten to fifteen minutes to replenish the water and fuel and relight the boiler fire to maintain the steam pressure.


Cugnot was ahead of his time. Trials in 1771 by the French Army showed up the vehicle's limited boiler performance and its difficulties in traversing rough terrain and climbing steep hills, and rather than developing the invention they abandoned the experiment. In 1772 Cugnot was awarded a pension by King Louis XV for his work, but this was withdrawn at the start of the French Revolution in 1789 and he went into exile in Brussels, where he lived in poverty until he was invited back to France by Napoleon Bonaparte shortly before his death in 1804. His fardier was kept at the military Arsenal until 1800, when it was transferred to the Conservatoire National des Arts et Métiers where it remains on display to this day.


See more about Steam Engines.


1771 The world's first machine powered factory began operations in Cromford, Derbyshire. English inventor Richard Arkwright pioneered large scale manufacturing using a water wheel to replace manual labour used to power the spinning frames in his cotton mill.


1771 German-Swedish pharmaceutical chemist Carl Wilhelm Scheele discovered Oxygen and, two years later, Chlorine. A prolific experimenter, he is also credited with the discovery of the gases Hydrogen fluoride, Silicon fluoride, Hydrogen sulfide and Hydrogen cyanide. In addition he isolated and characterised glycerol, lactose and ten of the most familiar organic acids including tartaric acid, citric acid, lactic acid and uric acid.

He was also the first to report the action of light on Silver salts which became the basis of photography for over 180 years.


He received very little formal education and lived a simple life in a small town, so his many achievements received little publicity. One result of this comparative obscurity is that others independently retraced his paths and were later credited with discoveries he had already made: Priestley for Oxygen in 1774 and Davy for Chlorine in 1810.


Scheele was found dead in his laboratory at the age of 43, his death probably caused by exposure to the many poisons with which he worked. It was not unknown for scientists of his day to taste the chemicals with which they were working.


1774 An electrostatic telegraph is demonstrated in Geneva, Switzerland by Frenchman George Louis LeSage. He built a device composed of 24 wires each contained in a glass tube to insulate the wires from each other. At the end of each wire was a pith ball which was repelled when a current was initiated on that particular wire. Each wire stood for a different letter of the alphabet. When a particular pith ball moved, it represented the transmission of the corresponding letter. Intelligible messages were transmitted over short distances and LeSage's system is considered to be the first serious attempt at making an electrical telegraph.


1775 Like many experimenters of his time, Alessandro Volta constructed his own perpetual electrophorus ("that which carries off electricity") to provide a regular source of electricity for his experiments. It was crude, consisting of a resin plate which was rubbed with cat's fur or a fox tail, and a separate insulated metal plate for picking up the charge.


1775 In response to the demands of the armaments industry and the nascent steam power industry, English engineer John Wilkinson made one of the first precision machine tools, a cylinder boring machine. His machine secured for him the largest share of the profitable business of supplying cannons during the American War of Independence. Wilkinson is reputed to have been Britain's first industrialist to become a millionaire.


1775 Richard Ketley, the landlord of Birmingham's Golden Cross Inn, founded the first Building Society. It was a mutual financial institution owned by its members, originally offering them savings and mortgage lending services. Members of Ketley's society paid a monthly subscription to a central pool of funds which was used to finance the building of houses for members, which in turn acted as collateral to attract further funding to the society, enabling further construction. The idea quickly caught on and building societies were soon established in many cities of the UK. More recently, building societies have expanded into the provision of banking and related financial services to their members.


1779 The world's first Iron bridge, built across the River Severn gorge at Coalbrookdale in Shropshire, was opened. It was designed by Thomas Farnolls Pritchard, a local architect from Shrewsbury, with a span of 100 feet (30 m), and was built by the Iron maker Abraham Darby III, grandson of Abraham Darby. It is still in use as a pedestrian bridge today. The bridge is a surprisingly graceful design, built from cast Iron, but since there was no experience of using cast Iron, or any other metal, as a structural material, the design used techniques based on the more familiar carpentry, with slender, custom designed castings in compression connected together using mortise and tenon and blind dovetail joints.

The bridge was an engineering marvel in its day. See photograph and details of the Coalbrookdale Ironbridge.


Shares were issued in 1775 to raise the £3,200 estimated cost of the bridge, but Darby found it difficult to attract investors and had to give a personal guarantee to cover any costs in excess of this estimate. He was awarded the contract to build the bridge and to supply the ironwork from his Coalbrookdale plant, and construction eventually started in 1777, but the actual cost of building the bridge turned out to be £6,000, leaving Darby in debt for the rest of his life.


1779 English inventor Samuel Crompton invented the spinning mule, so called because it was a hybrid which combined the moving carriage of Hargreaves' spinning jenny with the rollers of Arkwright's water frame, in the same way that a mule is the product of cross-breeding a female horse with a male donkey. The spinning mule was faster, provided better control over the spinning process and could produce several different types of yarn. It was first used to spin cotton, then other fibres, enabling the production of fine textiles.


1780 English inventor James Pickard patented the crank and flywheel to convert the reciprocating motion of Newcomen's engine into rotary motion. He offered the patent rights for his device to Boulton and Watt in return for the rights to use Watt's patent for the separate condenser. Watt refused, and instead designed a sun and planet gear to circumvent Pickard's patent. Once Pickard's patent expired, Boulton and Watt adopted the crank drive in their engines. The sun and planet gear was actually designed in 1781 by William Murdoch, an employee of Boulton and Watt, but it was patented in Watt's name.


The sun and planet gear mechanism used two spur gears and was much more complex than the crank mechanism. In this application the sun gear was fixed to the axle or output shaft: it did not rotate about the axle but rotated with it. The planet gear likewise did not rotate on its own axis but was fixed to the end of the connecting rod. The reciprocating motion of the piston caused the end of the connecting rod, on which the planet gear was mounted, to trace a circular path around the sun gear, causing the sun gear, and hence the output shaft to which it was attached, to rotate. With equal sized gears the arrangement had the useful property that the output shaft turned twice for each orbit of the planet gear, double the speed a simple crank would have given, as the sketch below verifies.
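A quick check of that speed-doubling property (my own sketch; the rolling-contact argument gives an output speed of (1 + r_planet/r_sun) times the orbital speed of the connecting rod end):

```python
# Sun and planet gear: the planet, fixed to the connecting rod, orbits without
# spinning on its own axis, so rolling contact turns the sun gear (and the
# output shaft) at (1 + r_planet/r_sun) times the orbital rate.
def output_revs_per_orbit(r_sun: float, r_planet: float) -> float:
    return 1.0 + r_planet / r_sun

print(output_revs_per_orbit(1.0, 1.0))   # 2.0: Watt's equal gears doubled the speed
```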


See more about Steam Engines.


1782 French mathematician Pierre-Simon Laplace, building on earlier work by Swiss mathematician Leonhard Euler, developed a mathematical operation now called the Laplace Transform as a tool for solving linear differential equations. Its most significant advantage is that differentiation and integration become multiplication and division respectively, similar to the way that logarithms change the multiplication of numbers into the simpler addition of their logarithms. By applying Laplace's integral transform to each term in a differential equation, the terms can be rewritten in terms of a new variable "s" and the equation is converted into a polynomial equation which is much easier to solve by simple algebra. The solution to the original problem is retrieved by applying the Inverse Laplace Transform.

This technique simplifies the analysis of control systems and analogue circuits which are characterised by time varying differential equations. Laplace's method thus transforms differential equations in the time domain into algebraic equations in the s-domain.
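
As a minimal modern sketch (the example equation and the use of the SymPy library are illustrative additions, not from the original text), consider dy/dt + 2y = 0 with y(0) = 1. Transforming term by term gives the algebraic equation sY(s) - 1 + 2Y(s) = 0, which simple algebra solves for Y(s); the Inverse Laplace Transform then recovers y(t).

import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)
Ys = sp.symbols('Ys')   # stands for Y(s), the transform of the unknown y(t)

# dy/dt transforms to s*Y(s) - y(0); with y(0) = 1 the ODE becomes algebraic
Ysol = sp.solve(sp.Eq(s * Ys - 1 + 2 * Ys, 0), Ys)[0]
print(Ysol)                                      # 1/(s + 2)
print(sp.inverse_laplace_transform(Ysol, s, t))  # exp(-2*t)*Heaviside(t), i.e. y = e^(-2t)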


Between 1799 and 1825 Laplace published in five volumes "Traité de Mécanique Céleste" (Celestial Mechanics), a description of the workings of the solar system based on mathematics rather than on astronomical tables. In it, he translated and expanded the geometrical study of solar mechanics used by Newton to one based on calculus.

A copy of the work was presented to Napoleon who is reported to have asked why there was no mention of God in the study, to which Laplace is alleged to have replied "Je n'avais pas besoin de cette hypothèse-là". ("I had no need of that hypothesis.").


Laplace also developed the foundations of probability theory which he published in 1812 as "Théorie Analytique des Probabilités". Prior to that, probability theory was solely concerned with developing a mathematical analysis of games of chance as exemplified by Pascal. Laplace applied the theory to the analysis of many practical problems in the social, medical, and juridical fields as well as in the physical sciences including mortality, actuarial mathematics, insurance risks, the theory of errors, statistical mechanics and the drawing of statistical inferences.


In 1799 Laplace was appointed by Napoleon as Minister of the Interior but he was removed after only six weeks "because he brought the spirit of the infinitely small into the government".


He later provided the explanation of the anomaly between Newton's theoretical calculation of the speed of sound and the speeds actually measured.


1783 Henry Cort, owner of a forge in Portsmouth supplying Iron products to the British Navy, invented and patented a grooved rolling mill for producing wrought Iron bars and rods replacing the ancient method of hammering the bloom produced by the bloomery furnace. This reduced the processing time by over 90% and produced a much cheaper and better quality product.


In 1784 Cort also patented the reverberatory furnace and puddling, a new method of converting cast pig iron into low carbon content wrought iron to improve its quality and tensile strength. (The term "reverberation" was used at the time to describe "rebounding" or "reflecting", NOT "vibrating"). The reverberatory furnace was like a very large oven containing a coal fire which was isolated from a separate hearth containing the pig iron charge which was in turn contained in a "puddle" in the base of the hearth. The hot gases from the fire were directed over the top of the puddle heating it directly and also by reflected heat from the roof over the hearth. In this way poor quality fuel could be used without the risk of contaminating the Iron. It was a bit like a modern fan assisted oven used to cook a bowl of soup, with the oven door being opened from time to time to stir the soup, except on a much greater scale.

The puddle of molten pig iron was stirred manually with long rods by "puddlers" to promote oxidation or burning of the remaining Carbon in the Iron by the Oxygen in the hot air to form the wrought Iron and CO2 which was released. After the metal cooled and solidified, it was worked with a forge hammer and could be rolled into sheets, bars or rails. This was the method used to produce the wrought Iron used in the first Ironclad warships. It was also used for the small scale production of low-Carbon steels for swords, knives and weapons.


Cort's two inventions reduced the costs and increased the supply of better quality wrought Iron with fewer inclusions and a more homogeneous grain structure, enabling its potential use in more widespread and new applications.


See also Iron and Steel Making.


1784 Cavendish demonstrated that water is produced when Hydrogen burns in air, thus proving that water is a compound of two gases and not an element and overturning over two thousand years of conventional wisdom.


1784 King Louis XVI of France set up a Royal Commission to evaluate the claims of German healer and specialist in diseases of the wealthy, Franz Anton Mesmer, who had achieved international notoriety with his theory of animal magnetism and its supposed therapeutic powers. Members of the committee included Benjamin Franklin, Antoine Lavoisier and the physician Joseph-Ignace Guillotin, inventor of the Guillotine which was later used to remove the heads of both Lavoisier and the King. Mesmer had claimed extraordinary powers to cure patients of various ailments by using magnets. He also claimed to be able to magnetise virtually anything including paper, wood, leather, water, even the patients themselves, and that he himself was a source of animal magnetism, a magnetic personality. His clients were mainly aristocratic women, many of whom reported pleasurable experiences as Mesmer moved his hands around their bodies to align the flow of magnetic fluid while they were in a trance. Mesmer was a patron of the composer Wolfgang Amadeus Mozart, who included a scene in which Mesmer's magnets were used to revive victims of poisoning in the opera "Così fan tutte". The committee however concluded that all Mesmer's observed effects could be attributed to the power of suggestion and he was denounced as a fraud. He did however keep his head (the French revolution was still five years away) and his name lives on as hypnotists mesmerise their subjects.

Guillotin by the way was not a revolutionary. As a physician he merely proposed the guillotine as a more humane method of execution rather than hacking away with a sword.


1785 French military engineer and physicist, Charles-Augustin de Coulomb, published the correct quantitative description of the force between electrical charges, the Inverse Square Law, which he verified using a sensitive torsion balance he had invented in 1777. He showed that the electrical charge resides on the surface of a charged body. Coulomb's Law was the first quantitative law in the history of electricity.

Coulomb also founded the science of friction.

The unit of charge is named the Coulomb in his honour.
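
In modern notation (added here for illustration; the constant and charges below are standard textbook values, not figures from the source) the Inverse Square Law reads F = k*q1*q2/r², so doubling the separation quarters the force.

# Coulomb's Inverse Square Law, F = k * q1 * q2 / r**2 (illustrative values)
k = 8.988e9       # Coulomb constant, N*m^2/C^2
q1 = q2 = 1e-6    # two like charges of one microcoulomb each
r = 0.1           # separation in metres

F = k * q1 * q2 / r**2
print(f"F = {F:.3f} N")   # ~0.899 N; at r = 0.2 m the force drops to ~0.225 N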


1786 Luigi Galvani, professor of anatomy at the Bologna Academy of Science in Italy, discovered that two dissimilar metals applied to the leg of a dead frog would make it twitch, although he believed that the source of the electricity was in the frog. He was quite possibly influenced in his conclusions by the knowledge of Walsh's experiments with electric fish. He found Copper and Zinc to be very effective in making the muscles twitch. Could it be animal electricity?


Galvani, a religious man, believed without question that the electricity was a God given property of the animal and that electrical fluid (electricity) was the "spark of life". On the other hand his friend Volta, more of a showman, influenced by "the enlightenment" and "rational thought", questioned religious dogma and believed that the electricity was man made and came from the metals. For many years a debate raged until it was eventually resolved by Volta's invention of the Voltaic pile. In the meantime Galvani lost his job for refusing to swear allegiance to Napoleon's Cisalpine Republic, whereas Volta attempted to accommodate Napoleon and prospered under his rule. Sadly Galvani died in poverty in 1798 without knowing the outcome of the debate.


Galvani's experiments with frogs were repeated on a human specimen in 1803 by his nephew Giovanni Aldini at the Royal College of Physicians in London, this time with a battery. He used the corpse of George Forster, a convicted murderer who had just been hanged, to demonstrate the phenomenon called Galvanism. He touched a pair of conducting rods, linked to a large voltaic pile, to various parts of Forster's body causing it to have spasms. When one rod was placed at the top of the spine and the other inserted into the rectum, the whole body convulsed and appeared to sit upright, giving the illusion that electricity had the power of resurrection.

It is claimed that Aldini's demonstration was the inspiration for Mary Shelley's 1818 novel "Frankenstein" about a scientist who uses electricity to bring an inanimate body to life with disastrous consequences.


1787 Experiments by French physicist and chemist Jacques Charles (later continued by Joseph Louis Gay-Lussac) revealed that:

  • All gases expand or contract at the same rate with changes in temperature provided the pressure is unchanged.
  • The pressure of a fixed mass and fixed volume of a gas is directly proportional to the gas's absolute temperature. Discovered by Gay-Lussac in 1802, the effect (law) is now named after him.
  • The change in volume amounts to 1/273 of the original volume at 0°C for each Celsius degree the temperature is changed.

This work provided the inspiration for Kelvin's subsequent theories on thermodynamics.


Charles' Law and Gay Lussac's Law (1802) together with Boyle's Law (1662) and Avogadro's Law (1811) are known collectively as the Gas Laws.


Combining these laws into one relationship we get the Ideal Gas Law:

pV = nRT

where

p is the pressure

V is the volume

n is the number of moles associated with the volume

R is the universal gas constant

T is the absolute temperature in kelvins

Note that p*V has the dimensions of Force*Distance and thus represents a measure of the energy in the system. The relationship implies that the energy in the system is proportional to the temperature and that, for a given temperature and a given quantity of gas, the energy is constant no matter how the pressure and volume vary.
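
As a quick worked example (the constants are standard physical values, added for illustration and not quoted from the text), the law gives the familiar result that one mole of an ideal gas at 0°C and atmospheric pressure occupies about 22.4 litres.

# Worked example of pV = nRT (standard constants, for illustration only)
R = 8.314       # universal gas constant, J/(mol*K)
n = 1.0         # amount of gas, moles
T = 273.15      # 0 degrees C expressed in kelvins
p = 101325.0    # standard atmospheric pressure, Pa

V = n * R * T / p                                   # rearranged ideal gas law
print(f"V = {V:.4f} m^3 ({V * 1000:.1f} litres)")   # ~22.4 litres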


In his spare time, Charles was an enthusiastic balloonist making several ascents and improving ballooning equipment.


1787 John Fitch, a skilled metalworker and American patriot, after being imprisoned by the British in the Revolutionary war, turned his energy to harnessing steam power. Early steam engines were too big and heavy to be used in practical road vehicles, however this restriction did not apply to large marine vessels which were big enough to accommodate them. Fitch built a 45 foot (13.7 m) steamboat propelled by six paddles on either side like an Indian canoe, following up in 1788 with a 60 foot (18 m) paddle wheeler with stern paddles which moved like ducks' feet. In 1790 he launched an even larger boat, with improved paddle wheels more like modern designs, which operated a regular passenger service on the Delaware river, but with few passengers it operated at a loss and his financial backers pulled out. He obtained a French patent for his invention in 1795 but attempts to build a business in Europe also failed.

Undue credit for the invention of the steamboat is often given to Robert Fulton who repeated Fitch's work twenty years later, building and successfully operating steamboats on the Hudson River.


See more about Steam Engines.


1789 French chemist Antoine Laurent Lavoisier, considered to be the founder of modern chemical science, published Traité Élémentaire de Chimie or "Elementary Treatise of Chemistry", the first modern chemistry textbook. In it he presented a unified view of new theories of chemistry and a clear statement of the Law of Conservation of Mass, which he had established in 1772, that is: "In a chemical reaction, matter is neither created nor destroyed".

In addition, he defined elements as substances which could not be broken down further and listed all known elements at the time including Oxygen, Nitrogen, Hydrogen, Phosphorus, Mercury, Zinc, and Sulphur. As intended, it did for chemistry what Newton's Principia had done for physics one hundred years earlier.


Lavoisier was the first to apply rigorous scientific method to chemistry. He carried out his experiments on chemical reactions with meticulous precision devising closed systems to ensure that all the products of the reactions were measured and accounted for. He thus demolished the wild ideas of the alchemists as well as the Greek concept of four elements, earth, air, fire and water which had been accepted for over 2000 years.


Lavoisier had a wide range of interests and a prodigious appetite for work and funded his experiments from his part time job as a tax collector. He was aided in his scientific endeavours by his wife Marie-Anne Pierrette Paulze, whom he had married when she was only thirteen years old. The couple were at the centre of Parisian social life, but in 1794 Lavoisier's tax collecting activities fell foul of France's revolutionary mob and he was Guillotined during the Reign of Terror. An appeal to spare his life was cut short by the judge with the words "The Republic has no need of scientists".

Afterwards the French mathematician Joseph-Louis Lagrange said "It took them only an instant to cut off that head, and a hundred years may not produce another like it".


See also Lavoisier's relationship with Rumford


1790 The first patent laws were established in the USA by a group led by Thomas Jefferson. Until US Independence, when Intellectual Property Rights became protected by the American Constitution, the King of England officially owned the intellectual property created by the colonists. Patents had however been issued by the colonial governments and were protected by British law.

The first US patent was granted to Samuel Hopkins of Vermont for a new method of making Potash.


1791 German chemist and mathematician Jeremias Benjamin Richter attempted to prove that chemistry could be explained by mathematical relationships. He showed that such a relationship applied when acids and bases neutralise to produce salts: they do so in fixed proportions. He was thus the first to establish the basis of quantitative chemical analysis, which he named stoichiometry. He died of tuberculosis at the age of 45.
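
A small worked example in Richter's spirit (the molar masses are standard modern values, not figures from the text): an acid and a base always neutralise in the same fixed proportion by weight, whatever the quantities involved.

# Stoichiometry sketch: HCl + NaOH -> NaCl + H2O, a 1:1 reaction by moles
# (molar masses are standard modern values, added for illustration)
M_HCL, M_NAOH = 36.46, 40.00    # molar masses, g/mol

grams_naoh = 10.0
moles = grams_naoh / M_NAOH     # moles of NaOH = moles of HCl required
print(f"{moles * M_HCL:.2f} g of HCl neutralises {grams_naoh:.1f} g of NaOH")
# -> 9.12 g; doubling the NaOH doubles the HCl, the proportion never changes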


1791 English mining engineer John Barber patented a gas turbine engine. His patent, "A Specification of an Engine for using Inflammable Air for the purposes of procuring Motion and facilitating Metallurgical Operations.....and any other Motion that may be required.", outlined the operating principle and thermodynamic cycle of the engine which contained all the essential features of the modern gas turbine. The fuel used was coal gas. Fuel and air were compressed by two separate reciprocating piston pumps, chain driven from the turbine shaft, and then fed into a combustion chamber where the fuel was burned. The expanding combustion gases were then directed through a nozzle onto an impulse turbine wheel driving the output shaft.

Performance was unfortunately limited by the materials technology of the day and by losses in the compression stage which reduced the available output power. Barber had a solution to alleviate these problems. He geared a water pump to the output shaft which injected a small stream of cold water into the hot combustion gases. This had a dual benefit: the water cooled the combustion chamber and the impulse wheel, and the resulting steam increased the density of the jet impinging on the turbine wheel, thus increasing the power output.

He also envisaged using the output jet from the engine to power a boat through water.
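
In modern terms Barber's patent outlines what is now called the Brayton cycle. As a hedged illustration (the formula and figures are standard thermodynamics, not from the original patent), the ideal efficiency of such a cycle depends only on the pressure ratio, which shows why Barber's low pressure, lossy piston compressors crippled the output.

# Ideal Brayton (gas turbine) cycle efficiency (standard thermodynamics,
# added for illustration): eta = 1 - r**(-(gamma - 1)/gamma)
GAMMA = 1.4    # ratio of specific heats for air

def brayton_efficiency(pressure_ratio: float) -> float:
    return 1 - pressure_ratio ** (-(GAMMA - 1) / GAMMA)

for r in (2, 5, 10):
    print(f"pressure ratio {r:2d}: ideal efficiency {brayton_efficiency(r):.1%}")
# ratio 2 gives ~18%, ratio 10 gives ~48%: low compression means low efficiency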


1792 Scottish engineer and inventor William Murdoch, employed by Boulton and Watt to supervise their pumping engines in Cornwall, was the first to make practical use of coal gas. By heating coal in a closed iron retort with a hollow pipe attached, he produced a steady stream of coal gas for lighting his house.


Coal gas was one of the byproducts of pyrolysis, or the destructive distillation of coal, a process already used to produce the coke employed in metallurgical processes to extract metals from their ores. At first the public were not interested in Murdoch's application due to health and safety fears, and his employers discouraged him from patenting the idea, so he left the company in 1797 to exploit it himself. When others showed interest in commercialising coal gas, Boulton and Watt realised their mistake and Murdoch was invited back the following year. Boulton and Watt subsequently became major players in the gas business, selling integrated illumination systems with their own self contained gas generators. Coal gas lighting was eventually patented in 1804 by German inventor Friedrich Albrecht Winzer (Frederick Albert Winsor) who pioneered the installation in Britain of public gas lighting and gas distribution systems fed from large central gas works.


The production of coke and coal gas left huge residues of coal tar which were initially regarded as mostly waste. It was another 50 years before Perkin showed how considerable value could be extracted from this waste.


1794 American law graduate and inventor Eli Whitney patented the cotton gin ("gin" derived from "engine") which automated the process of separating cottonseed from raw cotton fibres, work which had formerly been done by slaves. It was about 50 times faster than the previous method of processing the cotton by hand and revolutionised cotton production in the United States. His cotton engine consisted of a box in which was mounted a revolving cylinder with spiked teeth, or wire hooks, which pulled the cotton fibre through small slotted openings, thus separating the seeds from the lint. A separate rotating brush, driven from the main drum via a belt and pulleys, removed the loose fibrous cotton lint from the projecting spikes or hooks. Early devices were powered by a hand crank but these were soon replaced by larger horse-drawn or water powered machines.

Paradoxically, the introduction of the cotton gin as a labour saving device did not reduce the demand for slave labour. Because cotton could be produced much more cheaply, demand increased, more cotton was planted and cotton replaced tobacco and indigo as the cash crop, so that many more slaves were required to grow the cotton and harvest the fields. Some people claim that by increasing the demand for slave labour, the introduction of the cotton gin was one of the causes of the American Civil War (1861-1865).


Despite the success of the cotton gin, it was quickly copied many times over and Whitney spent much of his money on legal battles over patent infringements.

In 1798 Whitney also pioneered the use of interchangeable parts in the production of muskets which proved to be more commercially successful.


1795 The hydraulic press, used for lifting heavy weights and in presses for metal forming, was patented by English engineer Joseph Bramah. The principle on which it depends was first outlined by Pascal 150 years earlier but had not been turned into practical products.

Bramah also invented a "burglar proof" lock, which remained unpicked for sixty-seven years and examples are still in use today. The secret of the lock was the precision to which it was made.


1796 - The First Vaccination

Vaccination was one of the world's most important medical discoveries and is still the only method yet devised to prevent the onset of an infectious disease.

The rationale for vaccination began in 1796 when the English doctor Edward Jenner who ran a medical practice in the small rural town of Berkeley in Gloucestershire noticed that milkmaids who had been infected by cowpox were not usually infected by smallpox. He guessed that exposure to cowpox could be used to protect against smallpox. To test his theory, he took material from a cowpox sore on a milkmaid's hand and inoculated it into the arm of an 8 year-old boy. Months later, Jenner exposed the boy several times to the smallpox variola virus, but the boy never developed smallpox confirming Jenner's hypothesis. (See details below)


Smallpox History

Smallpox was one of history's deadliest diseases which blighted the world for thousands of years killing many millions of people. During the 1700s it is estimated to have taken 400,000 lives each year in Europe alone. It was a terrible disease which attacked the small blood vessels in the skin, the linings of the body organs including the stomach and intestines, the mouth and other body orifices causing these membranes to bleed and disintegrate.

This highly contagious smallpox virus was easy to transfer from person to person because it was airborne and breathing in just a tiny amount was all it took to become infected. The first symptoms began with a high fever, headache, backache, vomiting and delirium. It was followed on the third or fourth day by red spots all over the skin on the face, the body and the limbs, changing in a few days to pustules (blisters filled with pus). The death rate for those contracting smallpox was between 20% and 40%. If the patient survived, scabs would form and fall off over the next few weeks leaving disfiguring, pitted scars that would remain indefinitely.

Smallpox was also called "variola" from the Latin "varius" meaning "speckled". The name pox referred to a wide spectrum of diseases, characterised by skin eruptions or sacs which leave pitted pock marks (or pockes), ranging from the relatively mild acne through smallpox, cowpox, chicken pox and others to syphilis (greatpox).


Around 910, Islamic physician al-Razi published "A Treatise on Smallpox and Measles" explaining how to distinguish smallpox from other pustule forming diseases and recorded that smallpox spread from person to person and that survivors did not develop it again. Unfortunately his words did not generate interest in the West.


Smallpox was known in China over 3000 years ago and they knew that survivors had a lifelong resistance to reinfection. During the innovative Ming Dynasty (1368-1644) the Chinese developed a method, later called "variolation", of preventing the disease by introducing a small quantity of the causative agent of the disease into the body in order to induce immunity. More generally this method is called inoculation. Chinese medical practitioners used ground up smallpox scabs which were blown through a tube into the right nasal passage of males and the left passage of females. Alternatively pus was taken from the donor's blisters and kept for a few weeks to "detoxify" it, then it was absorbed into cotton wool balls that were inserted into the patient's nose. A mild form of smallpox usually developed which gave the patients immunity from the disease. It was however "hit or miss" since the precise dosage was not known and there was a significant risk that a slightly excessive dose could prove fatal.

In 1700, two reports on the Chinese practice of variolation were received by Fellows of the Royal Society in London. One was sent by an employee of the East India Company, stationed in China, to Dr. Martin Lister, a specialist in spiders and crustaceans, outlining the procedure and exhorting him to implement the treatment in England, but the plea was ignored. A second similar report, received the same year by another English physician, Dr. Clopton Havers, who specialised in the structure of bones, suffered the same fate. The topic did not appear to match the particular interests of either of these physicians.


Further news of the use of variolation was brought to Europe in the early 18th century with the arrival of travellers from Constantinople (now Istanbul). In 1714, the Royal Society of London received more letters, this time from two physicians practising in the Ottoman Empire, Emanuel Timoni and Giacomo Pilarino, describing the technique of variolation as practised in Constantinople. The method consisted of taking a live sample of the smallpox virus, contained in the pus taken from a smallpox blister of a person suffering from a mild case of the disease, and introducing it into the scratched skin of the arm or leg of a healthy person who had not yet been attacked by the disease. These reports however did not change the ways of the conservative British physicians.

The use of variolation in Britain however picked up after its acceptance was actively promoted by Lady Mary Wortley Montagu, the wife of the British ambassador appointed to the Ottoman Empire in 1716, who had herself previously contracted smallpox and fortunately survived, though severely pock marked. She heard from Timoni about variolation in Turkey and at his suggestion, in 1718, she arranged for her five year old son to undergo the procedure in the traditional way by "healers" (they were, as she wrote in her correspondence, one of "a set of old women"), supervised by British Embassy surgeon Dr. Charles Maitland. Later, back in England when a smallpox epidemic struck in 1721, she also had her daughter inoculated by Maitland using the same method. To spread the good news about the possibilities of variolation and its success in treating her children, she launched a publicity campaign with newspaper reporters and invited three members of the Royal College of Physicians to examine her daughter; they in turn persuaded Sir Hans Sloane, president of the college, to support variolation. She also secured royal patronage by persuading Princess Caroline of Wales, the wife of the future King George II, to do the same, but not until Dr. Maitland had proved its effectiveness on an orphan and on seven condemned criminals from Newgate prison, who were given the choice of the gallows or submitting as subjects of Lady Montagu's smallpox experiments, a common practice in those days. They chose the variolation trials, survived with nothing worse than their inoculation scars, and were given their freedom.

As a result variolation was gradually introduced in Britain. Initially some surgeons required patients to undergo six weeks of preparation to "cleanse their systems" prior to variolation, during which they were bled, kept on a reduced diet and vigorously purged. There was no justifiable medical rationale for this procedure, which severely weakened the patients before their treatment, and after 30 years the practice was abandoned. The adoption of variolation was however still limited by the unacceptably high death rate of the patients, since up to 12% of healthy patients who had been variolated died as a result of their inoculation, compared to as many as 40% of those who contracted the disease naturally.


Edward Jenner and Vaccination

The eighth of nine children, Edward Jenner was orphaned at the age of five and was raised by his older siblings, who sent him to a free boarding school in 1757 when he was eight years old. That year the school was hit by a terrible smallpox epidemic and all pupils who had not yet undergone variolation were required to do so. Jenner was subjected to the obligatory six weeks of bleeding, fasting and purging, which left him very weak and afraid, before his inoculation. This was followed by his confinement with other desperately ill children who had contracted smallpox, which increased his trauma. This extremely unpleasant and distressing experience left the young boy with lasting psychological scars including severe anxiety, nightmares and hallucinations. At the age of thirteen he decided to become a physician and was apprenticed for six years to a local country surgeon while he gained his qualifications.


After qualifying, like any other doctor of the time, Jenner was aware of variolation and carried it out to protect his patients from smallpox. In the eighteenth century however, medical practitioners were not aware of the workings of the body's immune system, and from the early days of his career Jenner had been intrigued by country-lore which said that people who caught cowpox from their cows could not catch smallpox. He suspected that a weakened relative of the smallpox agent, which he called a virus, conferred protection against infection by the disease causing microbe. Unfortunately Jenner had no explanation for why this method worked, since no-one could see the tiny virus with the microscopes of the time.

This, and his own experience of variolation as a boy and the knowledge of the risks that accompanied it led him to undertake the most important research of his life.


Similar to smallpox, cowpox was a disease spread by direct contact with bodily fluids or shared objects such as clothing. Though it had smallpox-like, but much milder, symptoms such as lesions that affected the udders and teats of cows, it was relatively harmless. It was known to infect milkmaids who caught cowpox from their cows, but it was not deadly and the milkmaids were not unduly troubled by the disease. They felt rather off-colour for a few days and developed one or a small number of pocks, usually on their hands but sometimes on their faces; these pocks ulcerated and formed black scabs before healing on their own, leaving the milkmaids immune to future infection by both cowpox and smallpox.


In May 1796 a dairymaid, Sarah Nelmes, consulted Jenner about a rash on her hand. He diagnosed cowpox rather than smallpox and Sarah confirmed that one of her cows had recently had cowpox. Jenner realised that this was his opportunity to test the protective properties of cowpox by giving it to someone who had not yet suffered smallpox.

He chose James Phipps, the eight-year-old son of his gardener. On 14th May he made a few scratches on one of James' arms and rubbed into them some material from one of the pocks on Sarah's hand. A few days later James became mildly ill with cowpox but was well again a week later, so Jenner knew that cowpox could pass from person to person as well as from cow to person. The next step was to test whether the cowpox would now protect James from smallpox. On 1st July Jenner variolated the boy with smallpox. As he anticipated, and undoubtedly to his great relief, there were no adverse reactions and James did not develop smallpox, either on this occasion or on the many subsequent occasions when his immunity was tested again, confirming that the method was both safe and effective.


As a result of his first success, Jenner urged fellow physicians to try the inoculation but was disappointed by their lack of interest. Convinced of the benefits and practicality of using cowpox to induce immunisation from the deadly smallpox, he vowed to redouble his efforts to gain its acceptance.

He repeated the tests with eight children including his own son Robert and seven children of labourers and workhouse inmates. Except for Robert, who unexpectedly did not react, the other seven all reacted positively, confirming once more the success of the cowpox inoculation. Jenner subsequently variolated his son as a safety precaution when a local outbreak of smallpox occurred.

Two months later, Jenner submitted a report about this new development to the Royal Society for publication. Although it was supported by the society's reviewers, it was rejected by the President, Sir Joseph Banks, on the basis that it was at variance with established knowledge, did not contain enough evidence of its effectiveness and rested on too small an experimental sample; more cases were needed and, in addition, publication could diminish Jenner's own credibility. Jenner resolved to persevere and gather more cases, and he was supported by friends who advised him to publish privately, which he did two years later in a small 75 page book.

In 1798, he privately published the suggested book with the unusually long title "An Inquiry into the Effects of the Variolae Vaccinae, a disease discovered in some of the Western Counties of England, particularly Gloucestershire, and known by the name Cowpox". In it he described his treatment of 23 more patients by first vaccinating them with cowpox material and later challenging them with samples of smallpox. He noted that after receiving the cowpox vaccine the patients did not become infected by the smallpox, and suggested that the pox was caused by a "virus", a word commonly used at the time for poison.

Jenner's publication was an early example of a science based practice carrying out clinical trials to verify a theory, even though the risks to the subjects would not pass muster today. It was a milestone in the history of medicine since, for the first time, Jenner had developed a safe method to prevent, rather than treat, an infectious disease. Unlike variolation it was safe: nobody ever died from it, no one was disfigured by wretched cowpox scars and, what's more, the temporary cowpox infection induced in the patient was itself not contagious.

This is what makes vaccines such powerful medicines. Unlike most medicines, which treat or cure diseases, vaccines prevent them.


In 1801, Jenner published his treatise "On the Origin of the Vaccine Inoculation". In this work he summarised his discoveries and expressed hope that "the annihilation of the smallpox, the most dreadful scourge of the human species, must be the final result of this practice".

By the same year an estimated 100,000 people had been vaccinated using the same method.


Recognition - A Sad Story

Jenner was one of the world's great scientists. Unfortunately, at the time, his momentous discovery did not bring him the recognition or gratitude that he deserved or might have expected. Soon after the publication of his original book, British surgeons began vaccinating people and, as word spread about its effectiveness, the practice was soon adopted throughout the British Empire, followed quickly by the rest of the world. Despite this success however, there were still numerous detractors in the medical profession claiming that they had evidence that it didn't work, to which Jenner responded that some of their vaccines must have been contaminated with smallpox. But this did not help to quell the criticism. The critics continued to call cowpox "cow's syphilis", claiming that vaccination was "cowmania", that it inflicted animal diseases on humans and could turn humans into animals, that patients developed hairy animal mange and deformed ox heads with rashes all over their bodies and that, like syphilis, it also affected the brain.

To make matters worse, the general public became aware of these adverse opinions causing healthy patients to be fearful of the risk that they could be infected with the disease by the vaccine itself. Consequently, fewer people were vaccinated and more were variolated with the result that more than 8000 people died of smallpox in London in 1805. Ultimately, vaccination became widely accepted and gradually replaced the practice of variolation.

On one side Jenner suffered personal attacks and ridicule from his critics while on the other side, even amongst those who advocated the benefits of vaccination there were significant numbers who dishonestly claimed credit for discovering the successful vaccine, while other careless physicians had an unacceptably high failure rate for their vaccinations.


Jenner ran up substantial debts of over £12,000, an enormous sum in those days, in pursuing his research, promoting its acceptance, defending it against detractors and providing free consultations to sceptics. All of this kept him away from his modest medical practice, which provided most of his income, and required him to spend an inordinate amount of time in London, leaving him fearful of a spell in the debtors' prison.

Fortunately, the British parliament eventually granted him £10,000 in 1802 and a further £20,000 in 1807 in recognition of his work. In addition he received a donation of over £7,000 from grateful citizens in several Indian cities.


All of these troubles and the abuse that he had experienced were compounded by family misfortunes. In 1810, Jenner's eldest son Edward died of tuberculosis and his sister Mary died after falling downstairs, sending him into a deep depression. In 1812, his second sister Anne died from a stroke and during the same period his wife became incapacitated, bedridden and isolated with tuberculosis and arthritis; she died the following year, leaving him with an acute feeling of loneliness. As a consequence his mental abilities began to decline, as did the quality of his work, and he started to experience nightmares, hallucinations and horrific, fearful memories of his own childhood variolation. Jenner consoled himself with brandy and opium and, after a period of ill health, he died of a stroke in 1823.


For over 200 years Jenner's vaccination was the only method of immunising against smallpox, and even in the 20th century an estimated 300 million people around the world died from the disease. However in 1979, after an extensive vaccination programme, the World Health Organisation (WHO) declared that smallpox had been eradicated, and no cases of naturally occurring smallpox have occurred since. But vaccines don't just apply to smallpox; new vaccines have been developed to treat a wide variety of diseases. Currently there are four major classes of vaccines, each with several subclasses, which operate on different principles to create suitable vaccines for particular types of diseases, including those for which no naturally occurring vaccines are available. Vaccines from these classes are in turn customised to protect against a range of specific diseases and other ingredients may be added to improve their safety or effectiveness. See also Pasteur. According to the WHO, the number of different diseases controllable by vaccines is around 30 and they prevent 2 to 3 million deaths every year around the world.

The history of smallpox holds a unique place in medicine. It was one of the deadliest diseases known to humans, and the only human disease to have been completely eradicated by vaccination.


Notes


A Virus is a small collection of genetic code, either DNA or RNA, contained within a protein coat. Viruses are extremely small, about 10 to 100 times smaller than the smallest bacteria. They cannot replicate themselves independently like bacteria. They must first attach themselves to and infect their host's cells and use components of these cells to make copies of themselves, often killing the infected host cell in the process and causing damage to the host organism. The surface of every virus is covered with molecules, generally fragments of protein or carbohydrate, called antigens, which give it a unique identity. These antigens on the surface of the virus identify it as a foreign invader to the immune system.

The antigens on the surface of pathogenic cells are different from those on the surface of the body's cells. This enables the body's immune system to distinguish pathogens (disease-causing organisms) from cells that are part of the body. Antigens are also found on the surface of foreign materials like pollen, pet hairs and house dust where they can be responsible for triggering hay-fever or asthma attacks.

There are millions of different antigens and different virus types have been found everywhere on earth outnumbering bacteria by 10 to 1. Because viruses are not quite living creatures like bacteria, they cannot be killed by antibiotics. Only antiviral medications or vaccines can eliminate or reduce the severity of viral diseases, including AIDS, COVID-19, measles and smallpox.


Viral Infections and the Immune System

The body has many ways of defending itself against pathogens. The first lines of defence are physical barriers such as skin, mucus, and hairs that prevent pathogens from entering the body and keep the airways clear. If a pathogen gets through all the barriers to infection, a second line of defence is activated. These are the white blood cells, also called leucocytes, of the immune system whose function is to protect the body from both infectious disease and foreign invaders. Each antigen has a unique shape that can be recognised by the immune system's white blood cells which then produce corresponding antibodies to mesh with the shape of the antigens causing them to be neutralised or destroyed.


How Vaccines Work

A Vaccine stimulates the immune system to produce special proteins called antibodies, just as it would if the body were exposed to the disease. After getting vaccinated, the body develops immunity to that disease without having to get the disease first.

Vaccines contain the same virus as, or one similar to, the virus that causes the disease (for example, measles vaccine contains measles virus), but the virus has been either inactivated or weakened to the point that it doesn't make you sick. Some vaccines contain only a part of the disease germ.

The word "vaccine", coined by Jenner in 1796, is derived from the Latin "vacca" (a cow) and became the name for the cowpox virus and inoculation with cowpox became known as "vaccination". Later the word vaccine became used more generally to describe any substances used to stimulate the production of antibodies and provide immunity against one or several diseases. Vaccines are prepared from the causative agent of a disease, its products, or a synthetic substitute, treated to act as a mild antigen without inducing the disease. Because vaccines are designed to react to particular antigens, they are disease specific.


Vaccination involves introducing into the living host an attenuated (weakened) form of the virus, whose antigens provoke the host's immune system into producing the corresponding antibodies to fight the infection. Some of the white blood cells involved in this response persist in the host's bloodstream as so called "memory cells". The next time the host encounters the disease, its body can quickly produce the antibodies to fight it and so will either suffer a very mild form of it or not suffer at all.


1797 Young Prussian noble Alexander von Humboldt published a book outlining his theories about Galvanic electricity and his experiments to support them. He believed that the electricity came from the muscle and was intensified by the electrodes and he carried out experiments on plants and animals to prove it. He also carried out numerous experiments on himself to gather more data using a Leyden jar to inflict severe shocks on his body until it was badly lacerated and scarred. He was mortified three years later when his theories were proved completely wrong by Volta and turned his attention instead to geology, botany and exploration in all of which he found international fame but no fortune.


1797 English engineer Henry Maudslay introduced the precision screw-cutting lathe. Although lathes had been in use from before 3000 B.C. when the Egyptians used the bow lathe for wood turning, Maudslay's lathe was the first true ancestor of the modern machine tools industry.


Maudslay began his career in 1789 as a blacksmith, making machinery for Joseph Bramah, and progressed to the more precise work required for Bramah's hydraulic and lock making systems before he opened his own business. His first major contract was to make the manufacturing equipment used in Marc Isambard Brunel's block making plant.


He recognised the importance of having an accurate reference plane for marking out, for inspection and for setting out tooling and assemblies, to be used as a baseline for all measurements of the work piece. He introduced and championed the use of a solid high precision surface plate, usually made of cast iron, for this purpose. He devised the method of creating these extremely flat surfaces and introduced the use of engineer's blue to aid in the process. The process needs three plates worked together to achieve the necessary degree of flatness. A thin coating of engineer's blue, slightly more sticky than marking blue, is applied to one of the plates and the plate is then rubbed against a second plate. Imperfections are indicated in the areas where the blue has been rubbed off one plate and transferred to the other. Originally these imperfections were corrected by grinding off the high spots, but this was later superseded by scraping. The process is repeated several times with all three plates in rotation until the plates are flat. The third plate is necessary to avoid creating matching pairs of concave and convex plates.

Engineer's blue is also used more generally to identify any high spots or contact between mating pieces. Marking blue, slightly thinner, is used for marking out surfaces in preparation for scribing or drilling.


Maudslay raised the standards of precision, fits, finishes and metrology and invented the first bench micrometer capable of measuring to one ten thousandth of an inch (0.0001 in ≈ 2.5 µm), which he called the "Lord Chancellor" because it resolved disputes about the accuracy of workmanship in his factory.


His pupils included Scottish engineer James Nasmyth who designed and made heavy machine tools, including the shaper and the steam hammer, for the ship building and railway industries, English engineer Joseph Whitworth who introduced the Whitworth Standard for screw threads and designed the Whitworth rifle and Richard Roberts, inventor of the first practical power loom, the self-acting spinning mule and various machine tools including gear cutting machines. See also Whitney - next.


1798 In an age when mechanical devices were individually made and laboriously fitted by hand, American engineer Eli Whitney pioneered the concept of interchangeable parts in the USA, using precision manufacturing made possible by more accurate machine tools just becoming available. Prior to that, if a part failed, a replacement part had to be made and fitted individually creating major problems and losses in battlefield conditions. Whitney's methods also reduced the skill levels needed to manufacture and assemble the parts enabling him to take on a contract to supply 10,000 muskets in two years to the US government. Whitney also built a rudimentary milling machine in 1818 for use in firearms manufacturing, but the universal milling machine as we would recognise it today was invented by American engineer Joseph Rogers Brown in 1862. Brown's machine was able to cut the flutes in twist drills. See also Whitworth's method of making twist drills which it replaced.


In 1794 Whitney also invented the cotton gin which revolutionised the processing of raw cotton.


1799 Count Rumford, man of science, inventor, administrator, philanthropist, self publicist and scoundrel, born Benjamin Thompson in the USA, founded The Royal Institution in London to promote and disseminate the new found knowledge of the industrial revolution. Its first director was a well connected, glamorous young Cornish chemist, Humphry Davy. Davy was a great showman, but did not consider "common mechanics" worthy of his brilliance, so the Institution rapidly evolved to presenting lectures for the wealthy, who paid to attend. In Rumford's original plan, there had been a back door through which the poor could access a balcony to hear the lectures from a distance for free. Davy had it bricked up. He had recently discovered that inhaling Nitrous oxide (N2O) gas produced euphoric effects which made him laugh, a property that led to its recreational use. He called it "laughing gas" and invited his friends to laughing gas parties. Noting that it also acted as a pain-killer, it was subsequently used as a general anaesthetic.

  • Apart from being an exclusive social club, the Royal Institution did however perform a very valuable function in that it was a subsidised science lab, one of the very few in the world, which enabled scientists of the day, such as Michael Faraday, to make many important discoveries.

Davy's initial experiments were done by dissolving zinc in nitric acid but he later found that he could obtain pure nitrous oxide simply by pyrolysis (heating) of dry ammonium nitrate with the reaction NH4NO3 → N2O + 2H2O. This made the new anaesthetic more readily available, less expensive and less addictive than the opium and laudanum used at the time.


Rumford was also a colourful character, like fellow American Benjamin Franklin, a man of many talents. Raised in pre-Revolutionary New England, at the age of 19 he married a wealthy 31-year-old widow and he took up spying on the colonies for the British but left for England in 1776 when he was found out, deserting his wife and daughter. At first he worked in the British foreign office as undersecretary for Colonial Affairs and was knighted by George III after a stint in the army fighting on the British side in the American War of Independence. He moved on to Munich where he carried out public and military works for the Elector of Bavaria being rewarded in 1792 with the title Count of the Holy Roman Empire. Among his inventions were the drip coffee pot and thermal underwear.


His interest in field artillery led him to study both the boring and firing of cannons. Out of this work he saw that mechanical power could be converted to heat, that there was a direct equivalence between thermal energy and mechanical work. Heat was produced by friction in unlimited quantities so long as the work continued. It could therefore not be a fluid called Caloric flowing in and out of a substance, as his adversary, the noted French chemist Antoine Lavoisier, had proposed, since such a fluid would exist only in a finite quantity.


After Lavoisier's death Rumford started a four year affair with his wealthy young widow, but after a short unhappy marriage they divorced, with Rumford remarking that Lavoisier was lucky to have been Guillotined. Rumford lived out the rest of his life in Lavoisier's former house in France, engaged in scientific studies, and it is claimed that he was paid by the French for spying on the British.


1799 English aristocrat, engineer and polymath, George Cayley, one hundred years before the Wright brothers, outlined the concept of the modern aeroplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control. He was the first to understand the underlying principles and to identify the four basic aerodynamic forces of flight, namely weight, lift, drag, and thrust, which act on any flying vehicle.

Unfortunately there would be no suitable power sources available for many years to realise such a design, but he applied his theories to the design of gliders and made the first successful glider to carry a human being.

Throughout time, countless philosophers and experimenters had been fascinated by the flight of birds and the shape of their wings, however Cayley was the first to undertake a methodical study of the shape and cross section of wings and it is to him that we owe the idea of the curved aerofoils used in modern aircraft designs.


His theories and designs were based on models he had tested on a "whirling-arm apparatus" he had built to simulate airflow over the wings and to measure the drag on objects at different speeds and angles of attack. It performed the same function as a modern wind tunnel but, based on an earlier design by Smeaton, it instead moved the models at high speed in a circular path through still air. Balance springs were used to measure the forces on the model.

From his researches, he showed that a curved aerofoil produces significantly more lift than a simple flat plate. He also identified the need for aerodynamic controls to maintain stability in flight and was the first to design an elevator and a rudder for that purpose.
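
In the modern notation that grew out of such experiments (the equation and figures below are standard aerodynamics added for illustration; Cayley himself worked empirically), lift is written L = 0.5*rho*v²*S*CL, with the lift coefficient CL capturing the superiority of the curved aerofoil over the flat plate.

# Modern lift equation, L = 0.5 * rho * v**2 * S * CL (illustrative figures,
# not measurements from Cayley's whirling arm)
rho = 1.225   # air density at sea level, kg/m^3
v = 10.0      # airspeed, m/s
S = 2.0       # wing area, m^2
CL = 0.8      # lift coefficient; a cambered aerofoil achieves a higher CL
              # than the flat plate Cayley compared it against

lift = 0.5 * rho * v**2 * S * CL
print(f"lift = {lift:.0f} N")   # ~98 N with these illustrative values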


Cayley's paper "On Aerial Navigation", published in 1810, was the first scientific work about aviation and the theory of flight and marked the birth of the science of aeronautics.

See more about Aerofoils and Theories of Flight.


Cayley is remembered for his ground breaking work on aerodynamics and aeronautics however he was also a prolific inventor and has been called by some "the English Leonardo" though there are other candidates for this accolade (see Hooke) and some of his sketches for ornithopters and vertical takeoff aircraft are reminiscent of Leonardo's drawings. The following are some of his other activities and inventions.

  • In 1800 he presented to parliament a comprehensive plan he had devised for land reclamation and flood control.
  • His early work between 1804 and 1805 centred on ballistics. He designed artillery shells with fins which imparted a rotating movement of the shell about the direction of travel which in turn increased their range and later he introduced shells with explosive caps which increased their destructive power.
  • In 1807 he published a paper on the Hot Air Engine and started a series of experiments to improve its performance. The ideas were picked up by Robert Stirling who made his own improvements and patented the engine in 1816.
  • Also in 1807 he described a reciprocating engine fuelled by gunpowder. It consisted of two pistons connected in line; attached to one of them was an external tube into which a fixed amount of gunpowder was automatically fed with each cycle. A constantly burning flame at the end of the tube ignited the gunpowder and the gas generated, together with the expansion of the air in the second piston due to the heat of the explosion, forced the pistons to the top of their stroke. The pistons were returned to the start position by means of a stout bowspring. The engine did not produce rotary motion. There is no record of it having been built and the idea was abandoned as being too unreliable.
  • In his quest for a lightweight undercarriage for his gliders, Cayley turned his attention in 1808 to the wheels. For centuries wheels had been made with stout wooden spokes to support the weight of the vehicle exerted through the axle bearing down on the spokes. The spokes themselves had to be strong enough to support this compressive load so that wheels were generally very heavy. Cayley turned the problem on its head. Instead of spokes in compression, he designed a wheel in which the axle was suspended from the rim of the wheel by slender wire spokes in tension. The magnitude of the force was the same but a wire under tension can accommodate much higher forces than a shaft of wood under compression. This lightweight wheel was the forerunner of the modern bicycle wheel. Cayley thus re-invented the wheel.
  • Another of his inventions was the caterpillar track, now used in tanks and earth moving equipment, which he patented in 1825 shortly after Stephenson ran his first railway service. It was an attempt to free steam trains from their dependence on the fixed itinerary determined by the railway lines so that they could deviate down untracked roads. He called it the "Universal Railway".
  • He experimented with light, heat and electricity and in 1828 he estimated absolute zero temperature to be -480°F (about -284°C), roughly 11°C lower than the -273.15°C value confirmed by Kelvin in 1848.
  • Cayley gave a lot of attention to safety on the new railway systems crisscrossing the country. His first idea, published in 1831 after the unfortunate William Huskisson was run over by Stephenson's Rocket at the opening of the Liverpool and Manchester Railway in 1830, the first widely reported fatal railway accident, was a "Cow Catcher", though this was never introduced in Britain. At the same time he examined operating procedures and recommended that speed limits and driver training should be introduced. He also proposed the introduction of automatic braking systems and designed a braking system for that purpose. To reduce injuries in case of accidents he designed a compressed air buffer truck to be incorporated into the trains and recommended that passengers should wear seat belts and that the walls of the carriages should be covered with padded cushions (air bags?). In 1841 he also proposed new operating procedures coupled with a method of automatic signalling he designed to ensure that no two trains could ever meet on the same tracks.
  • He also campaigned for the compulsory introduction of self-righting lifeboats following designs by William Wouldhave in 1789 and earlier proposals in 1785 by Lionel Lukin.
  • Following a fire at London's Covent Garden Theatre in 1808 in which twenty three firemen were killed, Cayley proposed the design of a new theatre which incorporated many of the features included in modern fire regulations, such as safety curtains, large outward opening doors, a large reservoir of water and pumps to direct it onto the fire. His proposal was not accepted and 47 years later its replacement, built in the classical Athenian style, was burnt to the ground.
  • Prompted by a friend who had lost his hand, in 1845 he designed a prosthetic hand with spring movements which enabled it to grip and pick up objects. At the time there were few concessions by the government or society to disabled people and amputees merely had a hook in place of their hand. Cayley's idea was considered too expensive and fell on stony ground.
  • In 1849, Cayley produced a small biplane glider in which a 10 year old boy made a short test flight. It was the world's first "heavier than air flying machine" to carry a human being. He followed up in 1853, at the age of 79, with a full scale glider which carried his reluctant coachman across Brompton Dale in Yorkshire.
  • In his spare time he was also a Member of Parliament, representing Scarborough.

Cayley had strong views that people should not profit in any way from human suffering and did not patent any of the ideas relating to safety or disability.


1800

VOLTA Inventor of the Battery

Alessandro Volta

The man who started it all.

Voltaic Pile - The First Battery

Volta's Pile

Alessandro Volta of the University of Pavia, Italy, described the principle of the electrochemical battery in a letter to the Royal Society in London. It was the first device to produce a continuous electric current. He had been interested in electrical phenomena since 1763 and in 1775 he had made his own electrophorus for carrying out his experiments. He was a friend of Galvani but disagreed with him about the nature of electricity. Galvani's experiments with frogs had led him to believe that the source of the electricity was the frog itself, whereas Volta sought to prove that the electricity came from outside the frog, in his case from the dissimilar metals used to probe the specimen.

His "Voltaic Pile" was initially presented in 1800 as an "artificial electric organ" to demonstrate that the electricity was independent of the frog. It was constructed from pairs of dissimilar metals zinc and silver separated by a fibrous diaphragm (Cardboard?) moistened with sodium hydroxide or brine and provided the world's first continuous electric current. The pile produced a voltage of between one and two volts. To produce a higher voltages he connected several piles together with metal strips to form a "battery". He was the first to understand the importance of "closing the circuit".

Volta's invention caused great excitement at the time and he gave many demonstrations including drawing sparks from the pile, melting a steel wire (the first fuse?), discharging an electric pistol and decomposing water into its elements. Though little more than a curiosity at first, the ability to deliver electric energy on demand was an important development contributing to the Industrial Revolution.

Napoleon was particularly impressed, insisting on helping with the demonstrations when he was present and showering Volta with honours despite the fact that France and Italy were initially at war with each other. The unit of electric potential was named the Volt in his honour.


After the invention of the battery, Volta was awarded a pension by Napoleon and he began to devote more of his time to politics, holding various public offices. He retired in 1819 and died in 1827. Although the battery was a sensation in scientific circles, giving impetus to an intensification of scientific investigation and discovery throughout the nineteenth century, surprisingly Volta himself never participated in these opportunities.


1800 English scientists, William Nicholson and Anthony Carlisle, experimenting with Volta's chemical battery, accidentally discovered electrolysis, the process in which an electric current produces a chemical reaction, and initiated the science of electrochemistry. (A discovery, like many others, claimed by Humphry Davy, though he did do original work on electrolysis at a later date.)

This new technique, made possible by the availability of the constant electric current provided by the new found batteries, enabled many compounds to be separated into their constituent elements and led to the discovery and isolation of many previously unknown chemical elements. Electrolysis, "loosening with electricity", thus became widely used by scientific experimenters.


1800 German born, English astronomer, Frederick William Herschel, in an experiment to measure the heat content of the various colours in the visible light spectrum, placed a thermometer in the spectral patches of coloured light. He discovered that not only did the temperature rise as he approached the low frequency, red end of the spectrum, but that it continued to rise beyond the red even though there were no visible light rays there. The conclusion was that the energy spectrum of the Sun's light was wider than that visible to the naked eye. The long wave radiation below the red end of the spectrum was named infra red radiation.


1801 After Herschel's discovery of radiation below the red end of the light spectrum (See above), German physicist, Johann Wilhelm Ritter, explored the short wave region beyond the violet end of the spectrum. Using the phenomenon discovered by Scheele, that the colourless salt Silver chloride is turned black by light rays from the violet end of the spectrum, he showed that higher frequency rays from beyond the violet also caused strong blackening of the silver salt. This higher frequency energy was named ultra violet radiation.


1801 French silk-weaver, Joseph-Marie Jacquard invented an automatic loom using punched cards to control the weaving of the patterns in the fabrics. This was not the earliest implementation of a stored program and the use of punched cards programmed to control a manufacturing process, as is often claimed. That honour goes to Bouchon, starting some 75 years earlier in 1725, improved by Falcon in 1728 and eventually refined by de Vaucanson in 1744. Jacquard presented his invention in Paris in 1804, and was awarded a medal and patent for his design by the French government, which consequently claimed the loom to be public property, paying Jacquard a small royalty and a pension. Its introduction caused riots in the streets by workers fearing for their jobs.

Despite the loom's fame, Jacquard's principles of programmed control and automation were not applied to any other manufacturing process for another 145 years, until Parsons produced the first numerically controlled machine tools.


1801 Frenchman Nicholas Gautherot observed that the metal plates used in electrolysis experiments could themselves drive a small current back in the opposite direction once the main current was removed. He had inadvertently discovered the rechargeable battery but did not realise its significance. Sixty years later Planté repeated the experiment with Lead plates and the Lead Acid battery was born.


1802 English chemist Dr William Cruikshank designed the first battery capable of mass production, a flooded cell battery constructed from sheets of Copper and Zinc in a wooden box filled with brine or dilute acid.


Cruikshank also discovered the electrodeposition of Copper on the cathodes of Copper based electrolytic cells and was able to extract metals from their solutions, the basis of modern metal refining and of electroplating, but it was not until 1840 that the commercial potential of the plating process was realised by the Elkingtons.


1802 British chemist William Hyde Wollaston discovered dark lines in the optical spectrum of sunlight which were subsequently investigated in more detail and catalogued by Fraunhofer in 1814.


Wollaston also investigated the optical properties of quartz crystals and discovered that they rotate the plane of polarisation of a linearly polarised light beam travelling along the crystal optic axis. He applied this property in his invention of the Wollaston prism in which he used two crystal prisms mounted back to back to separate randomly polarised or unpolarised light into two orthogonal, linearly polarized beams which exit the prism in diverging directions determined by the wavelength of the light and the angle and length of the prism. Wollaston prisms are used in polarimeters and also in Compact Disc player optics.


Wollaston was also active as a chemist. He discovered the element Palladium in 1803 and Rhodium the following year, and in 1816 he invented improvements to the battery. His attempts to invent an electric motor were less successful, however, and later brought him into conflict with Michael Faraday.


1803 Ritter first demonstrated the elements of a rechargeable battery made from layered discs of Copper and cardboard soaked in brine. Unfortunately there was no practical way to recharge it other than from a Voltaic Pile, and for many years such cells remained laboratory curiosities until someone invented a charger. Ritter was one of the first to identify the phenomenon of polarisation in acidic cells. He also repeated Galvani's "frog" experiments with progressively higher voltages on his own body. This was probably the cause of his untimely death at the age of 33.


1803 John Dalton, a Quaker school teacher working in Manchester, resurrected the Greek Democritus' atomic theory that every element is made up from tiny identical particles called atoms, each with a characteristic mass, which can neither be created nor destroyed. Dalton showed that elements combine in definite proportions and developed the first list of atomic weights, which he first published in 1803 at the Manchester Literary and Philosophical Society and at greater length in book form in 1808.


In 1801 Dalton also formulated the empirical Law of Partial Pressures, now considered to be one of the Gas laws. It states that in a mixture of ideal gases the total pressure is equal to the sum of the partial pressures of the individual components. In other words, each gas has a partial pressure which is the pressure the gas would have if it alone occupied the volume. Besides its concentration, the partial pressure of a gas in a mixture has a major effect in determining its physical and chemical reaction rates.

For an example of the application of the Law of Partial Pressures see Refrigeration.
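
As a minimal numerical sketch of the law (the mole fractions for dry air below are rounded, assumed values), each gas contributes the pressure it would exert alone in the same volume:

```python
# Dalton's Law of Partial Pressures: P_total = sum of the partial pressures.
P_TOTAL_KPA = 101.325  # total pressure of the mixture, kPa (assumed: 1 atm)

# Approximate mole fractions for dry air (illustrative values only).
mole_fractions = {"N2": 0.781, "O2": 0.209, "Ar": 0.009, "CO2": 0.001}

# Each component's partial pressure is its mole fraction times the total.
partial_pressures = {gas: x * P_TOTAL_KPA for gas, x in mole_fractions.items()}

for gas, p in partial_pressures.items():
    print(f"{gas}: {p:6.2f} kPa")

# The partial pressures sum back to the total, as the law states.
assert abs(sum(partial_pressures.values()) - P_TOTAL_KPA) < 1e-9
```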


1804 The electric telegraph, one of the first attempted applications of the new electric battery technology, was proposed by Catalan scientist Francisco Salvá. One wire was used for each letter of the alphabet and each number. The presence of a signal was indicated by a stream of hydrogen bubbles when the telegraph wire was immersed in acid. The system had a range of one kilometre.


1804 Mining engineer Richard Trevithick, known as the Cornish Giant, built the Pen-y-Darren steam engine, the first locomotive to run on flanged cast iron rails. It hauled 10 tons of iron and 70 men on 5 wagons from Pen-y-Darren to Abercynon in Wales on the Merthyr Tydfil tramroad, normally used for horse drawn traffic, at a speed of 2.4 mph (3.9 km/h) thus disproving the commonly held theory that using smooth driving wheels on smooth rails would not allow sufficient traction for pulling heavy loads. (See Trevithick's Pen-y-Darren Locomotive)


Trevithick's locomotive incorporated several radical innovations. He did not use the steam engine with a separate condenser recently invented by James Watt, the most efficient technology of the day, partly to circumvent the onerous conditions of the Boulton and Watt patent, but also because Watt's engines were too heavy and bulky for mobile use. Instead, to achieve greater efficiencies in a smaller, lighter engine he used a high pressure system with the power stroke being produced by high pressure steam on the piston rather than atmospheric pressure as in Watt's engine.

Higher pressure systems exposed weaknesses in the boiler designs of the day, which Trevithick overcame by using a cylindrical construction which was inherently stronger and could withstand much higher pressures. This became the pattern for all subsequent steam engines.

He did however use one of Watt's other innovations, the double acting piston, in which a sliding valve coupled to the piston enabled the steam to be applied alternately to each surface of the piston providing a power stroke in both the forward and back motions of the piston. (See Double Acting Piston).

To improve combustion efficiency he replaced the conventional method of producing steam, in which an external flame was used to heat the water in a separate kettle or boiler, by using instead a return flue boiler in which a U shaped, internal fire tube flue passed through the water boiler and bent back on itself to increase the surface area heating the water. Efficiency was further improved by directing the exhaust steam from the driving piston up the chimney to increase the air draft through the boiler fire. Known as the "blast pipe", this latter steam release is what gave steam engines their characteristic puffing sound.

Together, these innovations provided a 10 fold increase in efficiency over Watt's engine and all of these ideas were subsequently used by George Stephenson on his Rocket locomotive.


Converting the reciprocating motion of the piston to rotary motion for driving the wheels was, however, the Achilles' heel of this particular engine, being overly complicated. The single horizontal piston was located centrally above the boiler and the linear motion of the piston was transferred through a connecting beam, perpendicular to the piston, to two connecting rods or cranks, one on either side of the boiler. On one side the crank drove a large flywheel to smooth the motion and on the opposite side of the boiler the crank turned a spur gear mounted on the same shaft as the flywheel. The drive from this input gear was transferred via a large intermediate gear to spur gears mounted on the two drive wheels on the same side of the engine. There was no drive to the two wheels on the opposite side of the vehicle.


Trevithick was a larger than life character, bursting with ingenious ideas but unsuccessful in converting them into profitable business. Between 1811 and 1827 he spent time working on steam engines used in Peruvian Silver mines and exploring South America on his way back. After a perilous journey he arrived penniless in Cartagena in Colombia where by amazing coincidence he met Robert Stephenson, whom he had known as a child, who paid his passage home.


See more about Steam Engines.


1805 Italian chemist Luigi Valentino Brugnatelli, a friend of Volta, demonstrated electroplating by coating a silver medal with gold. He made the medal the cathode in a solution of a salt of gold, and used a plate of gold for the anode. Current was supplied by a Voltaic pile. Brugnatelli's work was however rebuffed by Napoleon Bonaparte, which discouraged him from continuing his work on electroplating.

The process later became widely used for rust proofing and for providing decorative coatings on cheaper metals. Gold plating is used extensively today in the electronics industry to provide low resistance, hard wearing, corrosion proof connectors.


1807 English physician, physicist, and Egyptologist Thomas Young introduced a measure of the stiffness or elasticity of a material, now called Young's Modulus, which relates the deformation of a solid to the force applied. Also called the Modulus of Elasticity, it can be thought of as the spring constant for solids. Young's modulus is a fundamental property of the material. It enables Hooke's spring constant, and thus the energy stored in the spring, to be calculated from a knowledge of the elasticity of the spring material.
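
For a uniform rod in tension, the equivalent spring constant follows as k = EA/L, where A is the cross sectional area and L the length; a minimal sketch, with assumed values for a steel rod:

```python
# Hooke's spring constant of a uniform rod from Young's modulus: k = E*A/L.
E_STEEL = 200e9   # Young's modulus of steel, Pa (typical assumed value)
A = 1e-4          # cross sectional area, m^2 (a 10 mm x 10 mm rod)
L = 1.0           # rod length, m

k = E_STEEL * A / L       # spring constant, N/m
x = 0.001                 # extension, m (1 mm)
energy = 0.5 * k * x**2   # elastic energy stored in the stretched rod, J

print(f"k = {k:.2e} N/m, stored energy at 1 mm extension = {energy:.1f} J")
```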

Young was the first to assign the term kinetic energy to the quantity ½MV² and to define work done as force X distance, which is also equivalent to energy, an extension to Newton's Laws but one which surprisingly took 140 years to emerge. More surprising still is that it was another 44 years before the concept of potential energy was proposed.


He also did valuable work on optical theory and in 1801 he devised the Double Slit Interference experiment which verified the wave nature of light. He directed a light source through a slit in a plate and observed a broad strip of light on a screen a short distance behind the plate. Repeating the experiment with two parallel slits, the light passing through, and spreading from, the slits and illuminating the screen appeared as a series of bright and dark parallel bands on the screen. The slight difference in the light path lengths to the screen via the two separate slits results in a phase shift between the two emerging light beams which creates constructive and destructive interference between the light waves passing through the different slits when they are recombined. This interference pattern thus confirmed the wave nature of light. See diagram of Young's Double Slit Experiment.

But see also Taylor's demonstration of the Corpuscular Nature of Light.
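
As a numerical aside, in the small-angle approximation the bright fringes are spaced λL/d apart on the screen, for wavelength λ, slit separation d and slit-to-screen distance L; the values below are assumptions for illustration:

```python
# Fringe spacing in Young's double slit experiment (small-angle approximation).
lam = 550e-9   # wavelength of green light, m (assumed)
d = 0.2e-3     # slit separation, m (assumed)
L = 1.0        # slit-to-screen distance, m (assumed)

fringe_spacing = lam * L / d
print(f"fringe spacing = {fringe_spacing * 1e3:.2f} mm")  # about 2.75 mm
```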


Young is considered by some to be the last person to know everything there was to know. (Not the only candidate for this fame). He was a child prodigy who had read through the Bible twice by the age of four and was reading and writing Latin at six. By the time he was 14 he had a knowledge of at least five languages, and eventually his repertoire grew to 12. He practised medicine until the work load clashed with his other interests, and among his many accomplishments he translated the inscriptions on the Rosetta Stone, which was the key that enabled hieroglyphics to be deciphered.


1807 Humphry Davy constructed the largest battery ever built at the time, with over 250 cells, and passed a strong electric current through solutions of various compounds suspected of containing undiscovered elements isolating Potassium and Sodium by this electrolytic method, followed in 1808 with the isolation of Calcium, Strontium, Barium, and Magnesium. The following year Davy used his batteries to create an arc lamp.

In 1810 Davy was credited with the isolation of Chlorine, already discovered by Scheele in 1773.


In 1813 Davy wrote to the Royal Society stating that he had identified a new element which he called Iodine, four days after a similar announcement by Gay-Lussac. The element had in fact been isolated in 1811 from the ashes of burnt seaweed by Bernard Courtois, the son of a French saltpetre manufacturer, who had passed samples to Gay-Lussac and Ampère for investigation. Ampère in turn passed a sample to Davy. Although Courtois' discovery was not disputed, both Davy and Gay-Lussac claimed credit for identifying the element.


1807 Robert Fulton, a prolific American inventor, is most remembered for building the Clermont steamboat which successfully plied the Hudson River in 1807, steaming between New York and Albany in 32 hours at an average speed of 5 miles per hour. He had earlier built a steamboat based on John Fitch's design which operated on the Seine in Paris in 1803. Where Fitch succeeded technically but failed commercially, Fulton made a commercial success of Fitch's technology and is thus, somewhat unduly, remembered as the inventor of the steamboat.


See Napoleon's judgement of the idea.

See more about Steam Engines


1807 As a result of his studies on heat propagation, French mathematician Baron Jean-Baptiste Joseph Fourier presented a paper to the Institut de France on the use of simple sinusoids to represent temperature distributions. The paper also claimed that any continuous periodic signal could be represented as the sum of properly chosen sinusoidal waves.


For the previous fifty years the great mathematicians of the day had sought equations to describe the vibration of a taut string anchored at both ends as well as the related problem of the propagation of sound through an elastic medium. French mathematicians Jean-Baptiste le Rond d'Alembert and Joseph-Louis Lagrange and Swiss Leonhard Euler and Daniel Bernoulli had already proposed combinations of sinusoids to represent these physical phenomena and in Germany, Carl Friedrich Gauss had also been working on similar ways to analyse mechanical oscillations (see below). Whereas their theories applied to particular situations, Fourier's claim was controversial in that it extended the theory to any continuous periodic waveform.

Among the reviewers of Fourier's paper were Lagrange, Adrien-Marie Legendre and Pierre Simon de Laplace, some of history's most famous mathematicians. While Laplace and the other reviewers voted to publish the paper, Lagrange demurred, insisting that signals with abrupt transitions or "corners", such as square waves could not be represented by smooth sinusoids. The Institut de France bowed to the prestige of Lagrange, and rejected Fourier's work. It was only after Lagrange died that the paper was finally published, some 15 years later.


When Fourier's paper was eventually published in 1822, it was restated and expanded as "Theorie Analytique de la Chaleur", the mathematical theory of heat conduction. The study made important breakthroughs in two areas. In the study of heat flow, Fourier showed that the rate of heat transfer is proportional to the temperature gradient, a new concept at the time, now known as Fourier's Law.


Of greater importance however were the mathematical techniques Fourier developed to calculate the heat flow in unusually shaped objects. He provided the mathematical proof to support his 1807 claim that any repetitive waveform can be approximated by a series of sine and cosine functions, the coefficients of which we now call the Fourier Series. These coefficients represent the magnitudes of the different frequency components which make up the original signal. When the sine and cosine waves of the appropriate frequencies are multiplied by their corresponding coefficients and then added together, the original signal waveform is exactly reconstructed. Thus complex functions such as differential equations can be converted into simpler trigonometric terms which are easier to handle mathematically by calculus or other methods.


This mathematical technique is known as the Fourier transform and its application to an electrical signal or mechanical wave is analogous to the splitting or "dispersion" of a light beam by a prism into the familiar coloured optical spectrum of the light source. An optical spectrum consists of bands of colour corresponding to the various wavelengths (and hence different frequencies) of light waves emitted by the source. In the same way, applying the Fourier transform to an electrical signal separates it into its spectrum of different frequency components, often called harmonics, which makes it very useful in electrical engineering applications.


Fourier showed that the harmonic content of a square wave can be represented by an infinite series of its odd harmonics:

f(t) = (4/π) Σ (1/n) sin(nωt),   summed over odd n = 1, 3, 5, …

where ω is the pulse repetition frequency of the square wave.

High frequency harmonics are required to construct the sharp transitions of the square wave, so that a high bandwidth is required to transmit a pulsed waveform without distortion. In practice, a bandwidth of 10 to 15 times the fundamental frequency of the pulse train is enough to transmit a recognisable square wave. Thus to transmit a 1 kHz square wave would require a channel bandwidth of at least 10 kHz.
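
A minimal Python sketch of this reconstruction (all values assumed for illustration): summing the odd-harmonic series above at the quarter-period instant, where the ideal square wave has the value +1, shows the approximation tightening as more harmonics, and hence more bandwidth, are included:

```python
import math

def square_wave(t, f0, n_max):
    """Truncated Fourier series of a square wave:
    (4/pi) * sum of sin(n*w*t)/n over odd n up to n_max."""
    w = 2 * math.pi * f0
    return (4 / math.pi) * sum(math.sin(n * w * t) / n
                               for n in range(1, n_max + 1, 2))

f0 = 1000.0   # fundamental frequency, Hz (a 1 kHz square wave)
t = 0.25e-3   # quarter period, s: the ideal square wave value here is +1

# The partial sums oscillate about the ideal value as they converge.
for n_max in (3, 9, 15):
    print(f"harmonics to {n_max} kHz: f(t) = {square_wave(t, f0, n_max):+.3f}")
```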


In electrical engineering applications, the Fourier transform takes a time series representation of a complex waveform and converts it into a frequency spectrum. That is, it transforms a function in the time domain into a series in the frequency domain, thus decomposing a waveform into harmonics of different frequencies, a process which was formerly called harmonic analysis.


The Fourier Transform has wide ranging applications in many branches of science and while many contributed to the field, Fourier is honoured for his insight into the practical usefulness of the mathematical techniques involved.


Fourier led an exciting life. He was a supporter of the Revolution in France but opposed the Reign of Terror which followed bringing him into conflict and danger from both sides. In 1798 he accompanied Napoleon on his invasion of Egypt as scientific advisor but was abandoned there when Nelson destroyed the French fleet in the Battle of the Nile. Back in France he later provoked Napoleon's ire by pledging his loyalty to the king after Napoleon's abdication and the fall of Paris to the European coalition forces in 1814. When Napoleon escaped from Elba in 1815 Fourier once more feared for his life. His fears were unfounded however and, despite his disloyalty, Napoleon awarded him a pension but it was never paid since Napoleon was defeated at Waterloo later that year.


As noted above Fourier was not the only one at the time looking for simple solutions to complex mathematical problems. Gauss was trying to calculate the trajectories of the asteroids Pallas and Juno. He knew that they were complex repetitive functions but he only had sampled data of the locations at particular points in time rather than a continuous time varying function from which to construct a mathematical model of the trajectories. Although this was before Fourier's time, like his contemporaries Gauss was aware that the result should be a series of sinusoids, but deriving a transform from sampled or discrete data, rather than from a time varying mathematical function, involves a huge computational task. Such a transform applied to sampled data is now known as a Discrete Fourier Transform (DFT) and can be considered as a digital tool whereas the general Fourier Transform only applies to continuous functions and can be considered as an analogue tool. In 1805 Gauss derived a mathematical short cut for computing the coefficients of his transform. Although he applied it to a specific, rather than a general case, we would recognise Gauss's short cut today as the Fast Fourier Transform (FFT) even though it owed nothing to Fourier.
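
By way of a modern illustration (the signal and sampling rate below are assumptions for the example), NumPy's FFT computes the DFT of sampled data and recovers the frequencies present:

```python
import numpy as np

# DFT of sampled data: recover the frequency content of a signal known
# only at discrete sample points, computed via the Fast Fourier Transform.
fs = 8000                      # sampling rate, Hz (assumed)
t = np.arange(256) / fs        # 256 sample instants
signal = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.fft.rfft(signal)                 # FFT: O(N log N) rather than O(N^2)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)   # frequency of each spectral bin, Hz

# The two strongest bins fall at the two component frequencies.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(map(float, peaks)))   # -> [1000.0, 2000.0]
```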


1808 Prolific Swedish chemist Jöns Jacob Berzelius working at the University of Uppsala in Sweden formulated the Law of Definite Proportions (discovered by Dalton five years earlier and by Richter twelve years before that) which establishes that the elements of inorganic compounds are bound together in definite proportions by weight. Berzelius developed the system of chemical notation we still use today in which the elements were given simple written labels, such as O for Oxygen, or Fe for Iron, and proportions were noted with numbers. He accurately determined the relative atomic and molecular masses of over 2000 elements and compounds.


1808 Fearing for his life, French civil and marine engineer, architect and royalist, Mark Isambard Brunel, fled from France in 1793 at the start of the "Reign of Terror" which followed the French Revolution after the execution of King Louis XVI. Settling in New York and taking American citizenship, he became the City's Chief Engineer, with friends in high places including Alexander Hamilton, one of the U.S. founding fathers. Hearing from one of Hamilton's guests that Britain's Royal Navy required 100,000 wooden pulley blocks per year as part of their war effort and was looking for a better method of manufacturing them, Brunel saw an opportunity to use his engineering talents in a venture too good to miss. Encouraged by Hamilton, who saw Brunel's antipathy towards Napoleon as a way to hamper the French, he left the U.S. for England in 1799 with a letter of introduction from Hamilton to Lord Spencer, the British Navy Minister.

After winning a contract to manufacture 60,000 wooden pulley blocks per year, Brunel designed and set up one of the first ever mass production lines, which went live in 1808. Instead of one man making a complete pulley, Brunel divided the work into a series of simple, short cycle, repetitive tasks and used 43 custom designed precision machines from Henry Maudslay to carry out the sequential operations in line. In this way he reduced the labour required to do the work from 110 men to 10, a formula which has since become an industry standard.


See also Brunel's Thames Tunnel


1809 At a demonstration at the Royal Institution, Humphry Davy amazed the attendees by producing an electric arc between two Carbon electrodes - the first electric light and the first demonstration of the useful application of electricity. It was no longer just a curiosity. The demonstration marked the start of a new era, the era of electricity.

Davy is generally credited with inventing the Carbon arc lamp; however the Russian Vasily V. Petrov had reported this phenomenon in 1803.


He also carried out extensive investigations of nitrous oxide (laughing gas), some might say a little too extensive, often with his friends, after which he reported on its effects and recommended its use as a pain killer.


In 1816 Davy also claimed the credit for the invention of the miner's safety lamp, named the "Davy lamp" in his honour but it was actually similar to a design already demonstrated in 1815 by self-taught railway pioneer George Stephenson. The privileged Davy was incensed that he could be upstaged by working class Stephenson.


According to J. D. Bernal's "Science in History" Davy is quoted as saying "The unequal division of property and of labour, the difference of rank and position amongst mankind, are the sources of power in civilized life, its moving causes, and even its very soul."

Davy died prematurely in 1829 at the age of 50, it is said like Scheele, from inhaling many of the gases he discovered or investigated.


See also Davy and the Royal Institution


1810


1811 Italian physicist Amedeo Avogadro introduced the concept of molecules. He hypothesized that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules. From this hypothesis it followed that the relative molecular weights of any two gases are the same as the ratio of the densities of the two gases under the same conditions of temperature and pressure.

This relationship called Avogadro's Hypothesis or Avogadro's Law, now considered as one of the Gas Laws, can be expressed as:

V₁ / n₁ = V₂ / n₂

where V is the volume of the gas and n is the number of molecules it contains.


The concept of a mole is a useful measure of the number of "elementary entities" (usually molecules or atoms, but also ions or electrons) contained in a system. See definition of a mole.

The number of "elementary entities" in one mole has been defined as Avogadro's constant or Avogadro's number. Its value was not determined until 1905, by Einstein in his doctoral dissertation.


Note that Avogadro's Number NA divided by the atomic mass of an element gives the number of atoms of that element in one gram.

Thus Uranium-235 contains 6.022 X 10²³ / 235 = about 2.563 X 10²¹ atoms per gram.
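
The same arithmetic as a short helper function (a sketch; the constant is rounded):

```python
AVOGADRO = 6.022e23  # elementary entities per mole (rounded value)

def atoms_per_gram(atomic_mass):
    """Number of atoms in one gram of an element of the given atomic mass."""
    return AVOGADRO / atomic_mass

print(f"{atoms_per_gram(235):.3e} atoms per gram")  # Uranium-235: ~2.563e+21
```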


The basic scheme of atoms and molecules arrived at by Dalton and Avogadro underpins all modern chemistry.


1812 German physician Samuel Thomas von Sömmering increased the range of Salvá's (1804) telegraph to three kilometres by using bigger (higher voltage) batteries, a method subsequently used with disastrous results on the Transatlantic Telegraph Cable.


1812 Venetian priest and physicist Giuseppe Zamboni developed the first leak proof high voltage "dry" batteries with terminal voltages of over 2000 Volts. They consisted of thousands of small metallic foil discs of tin or an alloy of Copper and Zinc called "tombacco", separated by paper discs stacked in glass tubes. The technology was not well understood at the time and while Zamboni consciously avoided the use of any conventional corrosive aqueous electrolyte in the cells, hence the name "dry" battery, the electrolyte was actually provided by the humidity in the paper discs and a variety of experimental greasy acidic pulps spread thinly on the foils to minimise polarisation effects. Although the battery voltage was very high, the internal resistance was thousands of megohms so the current drawn from the batteries was about 10⁻⁹ amps, limiting the battery's potential applications. One notable application however was a primitive electrostatic clock mechanism in which a pendulum was attracted towards the high voltage terminal of a Zamboni pile by the electrostatic force between the pendulum and the terminal. When the pendulum touched the terminal it acquired the same charge as the terminal and was consequently deflected away from it towards the opposite pole of another similar pile from which, by a similar mechanism, it was deflected back again, thus maintaining the oscillation. The current drain or discharge rate of the batteries was so low as to be undetectable with instruments available at the time and it was thought that the pendulum was a "perpetual electromotor". In fact Zamboni primary batteries have been known to last for over 50 years before becoming completely discharged!


1813 French mathematician and physicist Siméon Denis Poisson derived the relationship which relates the electric potential in a static electric field to the charge density which gives rise to it. The resulting electric field is equal to the negative gradient of the potential. This equation describes the electric fields which drive the flow of charged ions through the electrolyte in a battery.
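
In modern notation this can be sketched as follows (the symbols are today's conventions, not Poisson's: φ the potential, ρ the charge density, ε0 the permittivity of free space):

```latex
% Poisson's equation: the potential \varphi set up by a static charge density \rho
\nabla^2 \varphi = -\frac{\rho}{\varepsilon_0}
% The electrostatic field is the negative gradient of the potential
\mathbf{E} = -\nabla \varphi
```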

Poisson published many papers during his lifetime but he is perhaps best remembered for his 1837 paper on the statistical distribution now named after him. The Poisson distribution describes the probability that a random event will occur in a time or space interval under the conditions that the probability of the event occurring is very small but the number of trials is very large, so that the event actually occurs only a small number of times. The classic demonstration, by Ladislaus Bortkiewicz in 1898, used the distribution to model the likelihood of a soldier being killed by a horse kick, tested against Prussian army records, gathered over several years, of the number of soldiers killed in this way. Apart from analysing accident data, the distribution is fundamental to queuing theory, which is used in traffic studies and to dimension applications from supermarket checkouts and tollgates to telephone exchanges.
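
A minimal sketch of the distribution itself; the rate of 0.6 deaths per corps per year is an assumed round figure of the order found in the horse-kick records:

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of exactly k events when lam are expected:
    P(k) = lam**k * exp(-lam) / k!"""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 0.6  # expected deaths per corps per year (assumed illustrative rate)
for k in range(4):
    print(f"P({k} deaths) = {poisson_pmf(k, lam):.3f}")
# -> 0.549, 0.329, 0.099, 0.020 : rare events, many trials, few occurrences
```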


1814 German physicist Joseph von Fraunhofer identified and catalogued a series of 570 dark lines, first noticed by Wollaston in 1802, corresponding to specific wavelengths in the visible light spectrum from cool vapours surrounding the Sun.

In 1859 Kirchhoff and Bunsen began a systematic investigation of these lines and deduced that the dark lines were caused by absorption of radiation by specific elements in the upper layers of the Sun's atmosphere. Comparing these lines with the light spectrum emitted by individual elements on Earth enabled them to identify the elements present in the Sun.


1816 A two wire telegraph system based on high voltage static electricity activating pith balls, using synchronous clockwork dials at each end of the channel to identify the letters, was demonstrated in the UK by Francis Ronalds, an English cheese maker and experimental chemist, and subsequently described in his publication of 1823. Coming only a year after Wellington's victory over Napoleon at Waterloo, it was turned down by the haughty Admiralty, who had just invented semaphore signalling, with the comment "Telegraphs of any kind are now wholly unnecessary". It was an invention before its time and nobody showed any interest. At the time it was however witnessed by the young Charles Wheatstone who was later credited in the UK with the invention of the telegraph.


1816 William Wollaston built the forerunner of the reserve battery. To avoid strong acids eating away the expensive metal plates of his batteries or cells when not in use, he simply hoisted the plates out of the electrolyte, a system copied by many battery makers in the nineteenth century.


1816 Scottish clergyman, Dr. Robert Stirling patented the Stirling Engine a Hot Air external combustion engine first proposed by George Cayley in 1807. Key to the design was an "economiser", now called a regenerator, which improved the thermal efficiency. The first practical engine of this type, it was used in 1818 for pumping water in a quarry. The thermodynamic operating principle, later named the Stirling cycle in his honour, is still the basis of modern Stirling engine applications.


1819 French physicists Pierre Louis Dulong and Alexis Thérèse Petit formulated the law named after them that "The atoms of all simple bodies have exactly the same capacity for heat." In quantitative terms the law is stated as - The specific heat capacity of a crystal (measured in Joules per degree Kelvin per kilogram) depends on the lattice structure and is equal to 3R/M, where R is the gas constant (measured in Joules per degree Kelvin per mole) and M is the molar mass (measured in kilograms per mole). In other words, the dimensionless heat capacity is equal to 3.

Dulong and Petit's Law proved useful in determining atomic weights.
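
As a quick illustrative check (the molar mass is a rounded, assumed value), the law's estimate for copper comes out close to the measured specific heat:

```python
R = 8.314  # gas constant, J/(K*mol)

# Dulong and Petit: specific heat capacity c = 3R / M for molar mass M.
M_COPPER = 0.0635  # kg/mol, approximate molar mass of copper

c = 3 * R / M_COPPER
print(f"copper: c = {c:.0f} J/(K*kg)")  # ~393; the measured value is ~385
```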


1819 Moses Rogers, captain of the passenger ship SS Savannah, converted it from a three masted sailing ship to a paddle steamer by installing a 90 horsepower steam engine. More a hybrid than a steamship, it was 98 feet long with a displacement of 320 tons. Its fuel storage capacity was very low since the main propulsion was intended to be by the sails, with the paddle wheels only coming into use when the wind speed was too low. The paddle wheels were 16 feet (4.9 m) in diameter and, unusually, they could be stowed on deck when the ship was under sail. A steam ship was such a rare sight that when people saw the ship under steam they thought it was on fire. The captain was unable to persuade any travellers to risk their lives on the steamer's first Atlantic crossing, which consequently took place as an experimental voyage without passengers.

In 1819 it crossed the Atlantic from Savannah to Liverpool in 29 days and 11 hours, entering the record books as the first steam ship to make the transatlantic crossing, but the engine was used for only a total of about 80 hours during the journey. The return trip was made under sail in rough weather and took 40 days.


1820 Danish physicist Hans Christian Ørsted showed how a wire carrying an electric current can cause a nearby compass needle to move, the first demonstration of the connection between magnetism and electricity and of the existence of a hitherto unknown, non-Newtonian force. Two major scientific discoveries from a simple experiment.


1820 One week after hearing about Ørsted's experiment, French physicist and mathematician André-Marie Ampère showed that parallel wires carrying current in the same direction attract each other, whereas parallel wires carrying current in opposite directions repel each other.

He also showed that the force of attraction or repulsion is directly proportional to the product of the currents and inversely proportional to the distance between the wires.

He precisely defined the concept of electric potential distinguishing it from electric current. He later went on to develop the relationship between electric currents and magnetic fields.


Ampère's life was not a happy one. Traumatised by his father's execution by the guillotine during the French Revolution, there followed two disastrous marriages, the first ending with the early death of his wife. Finally he had to cope with an alcoholic daughter. The epitaph he chose for his gravestone says Tandem Felix ('Happy at last'). The unit of current was named the Ampère in his honour.


1820 French mathematician Jean-Baptiste Biot, together with compatriot Félix Savart, discovered that the intensity of the magnetic field set up by a current flowing through a wire varies inversely with the distance from the wire. This is now known as the Biot-Savart Law and is fundamental to modern electromagnetic theory. They considered magnetism to be a fundamental property rather than taking Ampère's approach which treated magnetism as derived from electric circuits.


1820 Johann Salomo Christoph Schweigger, professor of mathematics, chemist and classics scholar at the University of Halle, Germany, built the first instrument for measuring the strength and direction of electric current. He named it the "Galvanometer" in honour of Luigi Galvani rather than a "Schweiggermeter". Galvani was in fact unaware of the concepts of current flows and magnetic fields.


1820 Dominique François Jean Arago in France demonstrated the first electromagnet, using an electric current to magnetise an iron rod.


1820 American chemist Robert Hare developed high current galvanic batteries by using spiral wound electrodes to increase the surface area of the plates in order to achieve the high current levels used in his combustion experiments. He also used such batteries in 1831 to enable blasting under water.

Hare also developed an apparatus he called the Spiritoscope, designed to detect fraud by Spiritualist mediums, and in the process of testing his machine, he became a Spiritualist convert and eventually became one of the best known Spiritualists in the USA.


1821 Prussian physicist Thomas Johann Seebeck discovered accidentally that a voltage existed between the two ends of a metal bar when one end was cooled and the other heated. This is a thermoelectric effect in which the potential difference depends on the existence of junctions between dissimilar metals (in this case, the bar and the connecting wire used to detect the voltage). Now called the Seebeck effect, it is the basis of the direct conversion of heat into electricity and the thermocouple. See also the Peltier effect discovered 13 years later which is the reverse of the Seebeck effect.

Batteries based on the Seebeck effect were introduced by Clamond in 1874 and NASA in 1961.


1821 The English scientist Michael Faraday was the first to conceive the idea of a magnetic field which he demonstrated with the distribution pattern of Iron filings showing lines of force around a magnet. Prior to that, electrical and magnetic forces of attraction and repulsion had been thought to be due to some form of action at a distance.


In 1821 Faraday made the first electric motor. It was a simple model that demonstrated the principles involved. See diagram. Current was passed through a wire that was suspended into a bath of Mercury in the centre of which was a vertical bar magnet. The Mercury completed the circuit between the battery and the wire. The current interacting with the magnetic field of the magnet caused the wire to rotate in a circular path around the magnetic pole of the magnet. This was the first time that electrical energy had been transformed into kinetic energy. In 1837 Davenport made the first practical motor but it did not achieve commercial success and for forty years after Faraday's original invention the motor remained a laboratory curiosity with many weird and wonderful designs. Typical examples are those of Barlow (1822) and Jedlik (1828).


This invention was the source of a bitter controversy with Humphry Davy and William Hyde Wollaston, recently President of the Royal Society, who had tried unsuccessfully to make an electric motor. Faraday was unjustly accused of using Wollaston's ideas without acknowledging his contribution. The upshot was that Faraday withdrew from working on electromagnetics for ten years concentrating instead on chemical research.


Consequently it was not until 1831 that Faraday invented a generator or dynamo to drive the motor. Surprisingly nobody else in the intervening ten years thought of it either. Faraday had shown that passing a current through a conductor in a magnetic field would cause the conductor to move through the field but nobody at the time thought of reversing the process and moving the conductor through the field (or conversely moving a magnet through a coil) to create (induce) a current in the conductor.

In an ideal electrical machine, the energy conversion from electrical to mechanical is reversible. Applying a voltage to the terminals of a motor causes the shaft to rotate. Conversely rotating the shaft causes a voltage to appear at the terminals, thus acting as a generator. It was not until 1867 that the idea of a reversible machine occurred to Werner Siemens and practical motor-generators were not realised until 1873 by Gramme and Fontaine.


Faraday, the Father of Electrical Engineering, was the son of a blacksmith. A humble man with no formal education, he started his career as an apprentice bookbinder. Inspired by the texts in the books with which he worked and with tickets given to him by a satisfied customer, he attended lectures in 1812 given by the renowned chemist, Sir Humphry Davy, at the Royal Institution. At each lecture Faraday took copious notes, which he later wrote up, bound and presented to Davy. As a consequence Faraday was taken on by Davy as an assistant, for lower pay than he received in his bookbinding job. During his years with Davy he carried out much original work in chemical research, including the isolation of new hydrocarbons, but despite his achievements he was treated as a servant by Davy's wife and by Davy himself, who became increasingly jealous of Faraday's success. Davy also opposed Faraday's 1824 application for Fellowship of the Royal Society when he himself was president.


Faraday went on to eclipse his mentor, discovering electrical induction, inventing the electric motor, the transformer, the generator and the variable capacitor and making major contributions in the fields of chemistry and the theoretical basis of electrical machines (See Faraday's Law), electrochemistry, magneto-optics and capacitors. His inventions and theories were key developments in the Industrial Revolution, providing the foundations of the modern electrical industry, but Faraday never commercialised any of his ideas, concentrating more on teaching. He was perhaps the greatest experimenter of his time and although he lacked mathematical skills, he more than made up for it with his profound intuition and understanding of the underlying scientific principles involved, which he was able to convey to others. He used his public lectures to explain and popularise science, a tradition still carried on in his name by the IEE today.

Although he was noted for his many inventions, Faraday never applied for a patent.

In 1864 he was offered the presidency of the Royal Institution which he declined.


Not so well known is his relationship with Ada Lovelace, who idolised him and pursued him over a period of several months in 1844, writing flattering and suggestive letters to which he replied; in the end, however, he did not succumb to her charms.


When the British Prime Minister asked of Faraday about a new discovery, "What good is it?", Faraday replied, "What good is a new-born baby?"


Saint Michael? - Among Victorian scientists and experimenters, Faraday is revered for his high moral and ethical standards. A deeply religious man, he overcame adversity to become one of the nineteenth century's greatest scientists and an inspiring teacher commanding admiration and respect, but he was not entirely beyond criticism. In 1844 a massive explosion in the coal mine of the small Durham mining village of Haswell killed 95 men and boys, some as young as 10 years old: - most of the male population of the village. The mine owners would accept no responsibility for the disaster and the coroner refused to allow any independent assessor to enter the mine. Incensed, the local villagers took their grievance all the way to the Prime Minister, Sir Robert Peel. Such was the national concern that Peel dispatched two eminent scientists to investigate, Faraday the "government chemist" and Sir Charles Lyell the "government geologist". Their verdict was "Accidental death" and, pressurised by the coroner, they added "No blame should be attached to anyone". In the days before social security, the consequences of this verdict were destitution for the bereaved families.

Faraday's biographers who mention the Haswell mining disaster usually only recount the story that Faraday conducted the proceedings while seated on a sack which, unknown to him, was filled with gunpowder.


1822 English mathematician Peter Barlow built an electric motor driven by continuous current. It used a solid toothed disc mounted between the poles of a magnet with the teeth dipping into a mercury bath, similar in principle to the Faraday disk. Applying a voltage between the shaft and the mercury caused the disc to rotate, the contact with the moving teeth was provided by the mercury.


1822 Probably Britain's greatest engineer, Isambard Kingdom Brunel was sent to France in 1820 at the age of 14 by his father, Mark Isambard Brunel, to acquire a more thorough academic grounding in engineering and to serve an apprenticeship with master horologist and instrument maker Abraham Louis Breguet. Returning in 1822 the 16 year old took up his first job working in his father's drawing office which at the time was preparing the plans for the Thames Tunnel.


In his lifetime Isambard Brunel designed and built 25 Railways, over 100 bridges and tunnels, 3 ships, 8 docks and a pre-fabricated field hospital.

He thought big. Inspired, rather than deterred, by the seemingly impossible, his projects were audacious in scale and ambition, taking engineering way beyond the boundaries of what conventional wisdom believed to be possible with the technology of the day, setting new limits which were not matched by others for decades. A great all round engineer, he turned his hand to architectural, civil, mechanical and naval projects contributing to every detail of the designs. Nor was he afraid to get his hands dirty, helping out the men working on his projects with their manual work when necessary.


Brunel's aspirations may have had no limits, however there was a price to pay for this ambition. He had a prodigious capacity for work and would often be engaged in a number of major projects at any one time, but the actual fulfillment of his projects was carried out by contractors whom he hired and these contractors were frequently driven beyond their limits.

Though his engineering achievements were truly heroic, they were not always accompanied by commercial success for his clients, and engineering success was often tarnished by unrealistic expectations, aborted projects, missed deadlines, cost over-runs, accidents and, in the worst cases, lives lost; when things went wrong the contractors usually got the blame.


The following are just some of Brunel's achievements:


The Tunnels

  • 1825 - 1843 Thames Tunnel
  • Working for his father on the Thames Tunnel was Brunel's first job. A very difficult project. Previous attempts by Richard Trevithick and others to tunnel beneath the Thames had failed and subsequent formal investigations had judged such a construction to be impracticable. But Brunel and his father persevered despite enormous difficulties and proved the sceptics wrong. See Thames Tunnel.

    It was an experience which gave the young Isambard the confidence to take on many more "impossible" projects in his subsequent career.


  • 1836 - 1841 Box Tunnel
  • The route for Brunel's Great Western Railway (See below) was designed to follow the most direct line, minimising curves and inclines. This necessitated building a tunnel 1.83 miles (2,937 m) long through Box Hill in Wiltshire. At the time, it was the longest railway tunnel in the world.

    Though easier than the Thames Tunnel, the project was not without its difficulties. To speed the construction, work was carried out simultaneously on six separate isolated tunnel sections beneath the hill. These were essentially closed underground chambers until they were able to link up with the adjacent chambers as the excavation of the tunnel progressed. Access to these chambers for the workmen and for removing the excavated earth and rock was through the ventilation shafts, which were up to 290 feet (88 m) deep. Horses at the surface powered the hoists used for this purpose.

    Working conditions were very hazardous. Blasting through the rock in the underground chambers took place with the workmen present and consumed 1 ton of explosives per week. Illumination was by candle light and much of the work was done with pick and shovel. Water ingress had been underestimated and water often gushed from fissures in the limestone strata and from time to time emergency evacuations of the workmen were necessary.

    The project was completed in 1841, one year late and cost the lives of 100 workers.


The Bridges

Though Brunel designed over 100 bridges for his railway projects he did not follow a standard pattern. When the opportunity, or necessity, arose he came up with some striking and unique designs. The three examples which follow are perhaps his best known. All three are still in use today carrying modern day traffic.

(See pictures of Brunel's Bridges)

  • 1831 - 1864 Clifton Suspension Bridge
  • While convalescing after his 1828 accident in the Thames Tunnel, Brunel submitted a design for his first major project on his own, independent of his father. It was in response to a public tender for a road bridge across the Avon Gorge in Bristol, his home town. Brunel's design was for a suspension bridge with the roadway suspended from chains rather than cables. The main span of 702 ft 3 in (214.05 m) was the longest in its day. In 1831 the results of the tender were announced with Brunel's Clifton Suspension Bridge judged the winner. Work started immediately but was abandoned in 1843 when Bristol's City Council ran out of funds. After Brunel's death in 1859, work on the bridge was restarted as a memorial to its designer with funds raised by the Institution of Civil Engineers. It was completed in 1864.


  • 1835 - 1838 Maidenhead Railway Bridge
  • The Maidenhead Railway bridge was designed to carry Brunel's Great Western Railway (GWR) over the Thames. As with the Box Tunnel, Brunel's objective was to avoid inclines, so the elevation of the bridge had to be as low as possible above the surrounding fields. At the same time it needed wide spans across the river with high headroom to avoid impeding the river traffic below. Brunel's solution was a brick built bridge with two very wide but at the same time very slender arches of 128 feet (39 m) with a rise of only 24 feet (7 m). At the time it was the widest span for a brick arched bridge, and today it is still an essential link in the main line carrying high speed trains from London to the West Country.


  • 1848 - 1859 Royal Albert Bridge at Saltash
  • The Royal Albert Bridge is a railway bridge linking Devon with Cornwall, spanning the River Tamar at Saltash. Because of the terrain, the railway approaches the bridge from both sides of the river on curved tracks and it was not possible to find a simple construction which balanced the horizontal thrust on the bridge piers. Brunel's solution was to use a lenticular truss construction, also known as bowstring girder or tied arch construction, to carry the track bed. Heavy tubular arches in compression formed the top chords of the trusses, and chains in tension formed the bottom chords, balancing the compression forces in the arches. These trusses simply rested on the piers without exerting any horizontal thrust on them. The unique design used two spans of 455 feet (138.7 m) each. Construction started in 1848 and the bridge was opened by Prince Albert in 1859. Like the Maidenhead Bridge it is still carrying mainline rail traffic today.


The Railways

  • 1833 - 1841 Great Western Railway - GWR
  • Despite having no experience in railway construction, in 1833, just four years after the Rainhill Trials had established the viability of public railway systems, the 27 year old Brunel was appointed chief engineer for building the Great Western Railway between London and Bristol.

    The estimated price of the route was to be £2.8 million. Government approval was given and construction was started in 1835.

    As was typical of Brunel, he was personally involved in every aspect of the enterprise, from raising the finance to project management and everything in between. He set the highest standards for design and workmanship and took personal charge of every detail of the design, from the bridges and tunnels along the line and the railway stations at its ends, down to the architectural details of their lamp posts and even the contractors' tools.

    Brunel himself surveyed the entire route between London and Bristol, a distance of 118 miles. His target was to minimise inclines and curves so that the trains could run at high speed with increased passenger comfort.

    Responsibility for providing the trains was delegated to Daniel Gooch, an engineer who had trained with Robert Stephenson. For even higher speed and comfort, Brunel specified his trains to run on tracks much wider than the conventional "Stephenson's gauge" of 4 ft 8 1⁄2 in (1,435 mm). He chose to set his tracks 7 ft 0 1⁄4 in (2,140 mm) apart, on what became known as Brunel's "broad gauge". This added significantly to the cost of the bridges, tunnels, embankments and cuttings all along the line and required specially made trains to run on the tracks. This no doubt provided better comfort and speed but it was incompatible with the rest of the rail network making interconnections with the existing railway system difficult. This was one of the first ever standards wars and as has happened many times since, the superior technical system eventually lost out (in 1892) to the inferior system and had to be replaced because the older system had built up a much greater user base. (See The Stockton and Darlington Railway).


    Telegraph signalling using Cooke and Wheatstone's system was installed between Paddington station and West Drayton on 9 April 1839, a distance of 13 miles (21 km). It was the first commercial use of telegraph signalling on the railways.


    Brunel set the standard for railway excellence. When the line was completed in 1841 the alignment was so straight and level that some called the line "Brunel's Billiard Table" and the GWR was affectionately known as "God's Wonderful Railway".

    But the work had cost £6.5 million, more than double the original estimate, and thanks to the problems at the Box Tunnel it was one year late.


High Speed Trains?

  • Brunel's GWR, 118 miles (190 km) long, was completed in 1841, 6 years after approval by parliament, using an army of navvies equipped with only picks and shovels. It used Brunel's unique broad gauge track for which new trains had to be developed and manufactured during the same period.
  • 177 years after the GWR was approved, Britain's new High Speed Train system HS2 connecting London with Manchester and Leeds with 330 miles (531 km) of narrower, standard gauge track was announced by the government in 2012. Using powerful earthmoving equipment, tunneling machines, prefabricated track and bridge sections and automated track laying equipment it is scheduled for completion in 2033, 21 years after initial approval, including time for consultations and further approvals, at an estimated cost of over £100 Billion.

The Architecture

The designs for the prestigious railway stations at the termini of the Great Western Railway, and for the stations in between, are further examples of Brunel's versatility.


The Ships

Brunel's vision extended beyond the shores of Great Britain. He envisaged the Great Western Railway (GWR) as the first link en route to North America with the second link carried by steam-powered iron-hulled ships. Before the GWR was completed he set about fulfilling that dream.

As with all of his projects, his ideas were big. In the case of naval engineering there were good technical reasons justifying his opinions. He was aware that the volume, or carrying capacity, of a ship is proportional to the cube of its dimensions, whereas the water resistance is proportional to the cross sectional area of the ship below the water line and, to a lesser extent, to the surface area of the ship in the water, both of which are proportional to the square of the ship's dimensions. This meant that larger ships would be more efficient and that larger steam powered ships would need comparatively less fuel. This was particularly important for ocean going ships since their range was limited by the amount of fuel they could carry.

There are however practical limits to the size a ship can be, due to the flexing or hogging of the hull as it passes over the waves, which affects its seaworthiness. The installation of a heavy steam engine in the ship would tend to make this worse. Wooden-hulled ships are particularly prone to hogging and their length is limited to about 300 feet (100 m), whereas the hull of an iron ship can be made much more rigid and thus less subject to hogging, so that much bigger ships are possible. The conclusion was that in order to carry sufficient fuel as well as the cargo across the Atlantic in steam powered ships, they would have to be big and preferably iron-hulled.
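This square-cube scaling argument can be checked with a few lines of arithmetic. The following is a minimal sketch (the scaling figures are illustrative, not Brunel's own calculations):

    # Brunel's square-cube argument (illustrative numbers only).
    # Carrying capacity scales with the cube of a ship's linear dimensions,
    # while water resistance, and hence fuel burned per mile, scales roughly
    # with the square, so fuel per ton of cargo falls as the ship grows.

    def relative_fuel_per_ton(scale):
        """Fuel per ton of cargo for a ship 'scale' times larger in every dimension."""
        capacity = scale ** 3        # cargo volume ~ length cubed
        resistance = scale ** 2      # wetted area / drag ~ length squared
        return resistance / capacity   # = 1 / scale

    for scale in (1, 2, 3):
        print(f"Ship {scale}x larger: fuel per ton carried = {relative_fuel_per_ton(scale):.2f}x")

On this crude model a ship twice the size needs only about half the fuel per ton of cargo, which is the nub of Brunel's case for big ocean going steamers.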


As ever, Brunel was undaunted by his lack of experience in this new endeavour but went on to design and build three ships that revolutionised naval engineering.


  • 1836 - 1837 SS Great Western
  • Brunel's first ship, the 'Great Western', was the first steamship designed to provide a transatlantic service. It was an oak-hulled, paddle wheel steamer with a displacement of 2,300 tons, powered by two Maudslay and Field steam engines with a combined output of 750 horse power driving side-wheel paddles. The hull was reinforced with iron straps to increase its rigidity and it had four masts to carry auxiliary sails. At 236 feet (72 m) long, it was the longest ship in the world and had the capacity to carry 128 first class passengers with 20 servants and 60 crew.

    It was launched in 1837 and then sailed to London where it was fitted with the engines. On the return journey to Bristol the following year, under her own steam, fire broke out in the engine room. When Brunel went to investigate, the ladder he was descending into the engine room gave way, weakened by the fire, and he fell 20 feet (6 m), landing face down and unconscious in the water being used to douse the flames. Seriously injured once more, he missed the maiden voyage to New York eight days later. As a result of the fire, 50 passengers cancelled their bookings. In 1838, only nine years after the first demonstration of practical mobile steam power at the Rainhill Trials, the thought of crossing the Atlantic powered by a noisy, newfangled and possibly unreliable steam engine must have terrified the bravest of souls.


    On 4 April 1838, while the Great Western was being readied for the journey, the Sirius, a smaller ship with a displacement of 1,995 tons, designed to operate a ferry service between London and Cork in Ireland, was chartered by a rival company, British and American Steam Navigation, and left Cork destined for New York instead of London. Similar to the Great Western but smaller, it was a side-wheel, wooden-hulled steamship, 178 feet 4 inches (54.4 m) long with two masts for auxiliary sails, also built in 1837 (by Robert Menzies & Sons in Scotland) but never intended for crossing the Atlantic. Even loaded with the maximum amount of coal it could carry, it did not have enough fuel to complete the journey, and the crew burned the cabin furniture, spare yards (the spars which carry the sails) and one of the masts in their attempt to make the Sirius the first ship to cross the Atlantic under its own steam. Sailing ships normally did the journey in 40 days, but the Sirius made the crossing in 18 days, 4 hours and 22 minutes at an average speed of 8.03 knots (14.87 km/h).


    The Great Western embarked on her maiden voyage from Bristol to New York four days after the Sirius left Cork and arrived in New York, with 200 tons of coal still aboard, just one day after the Sirius, despite a crossing 220 miles longer, making the journey in 15 days 5 hours at an average speed of 8.66 knots (16.04 km/h). The Sirius made only one more round trip to New York, whereas the Great Western made a total of 45 round trips for its owners in the following 8 years before it was sold.


    Note: Neither of these ships was the first steamship to cross the Atlantic. That record was claimed in 1819 by the American steamship the SS Savannah which was tiny by comparison.


  • 1839 - 1843 SS Great Britain
  • Brunel made several proposals for a sister ship to the Great Western. His final proposal in 1839 was for the SS Great Britain, designed to carry 252 passengers (later increased to 730) and 130 crew at a cost of £70,000. It was the first ocean going steamship to combine an iron hull with screw propulsion. Bigger still than the Great Western, it was the largest ship afloat, 322 ft (98.15 m) long with a displacement of 3,675 tons, powered by engines weighing 240 tons with a rated power of 1,000 H.P., and with five schooner rigged masts and one square rigged mast to carry auxiliary sails. The final cost was £117,000.

    Launched in 1843, the Great Britain was the first iron ship to cross the Atlantic, making the voyage from Liverpool to New York in 1845 in a time of 14 days. Screw propellers had recently been claimed by Ericsson to be more efficient than paddle wheels and the Great Britain was fitted with a six bladed screw propeller with a diameter of 15 feet 6 inches (4.7 m), which was only 5% less efficient than modern day propellers. This enabled her to achieve speeds of 11 to 12 knots (20 to 22 km/h).


  • 1854 - 1858 SS Great Eastern
  • In 1852 Brunel was employed by the Eastern Steam Navigation Company to build another ship. His challenge was to design a ship to carry 4,000 passengers with a crew of 418 around the world without refuelling. (At the time there were no bunkering services to refuel ships en route to Australia). To accomplish this the ship would have to be big. Very big!


    His answer was the Great Eastern. Aided by John Scott Russell, an experienced naval architect and ship builder, Brunel conceived and built the Great Eastern, an iron ship with a displacement of 32,160 tons. At 692 ft (211 m) long, it was only 22% shorter than the 882 ft 6 in (269.0 m) Titanic, launched 53 years later in 1911. It was powered by five steam engines with a total output power of 8,000 H.P. (6.0 MW). Four of the engines drove two paddle wheels, each 56 feet (17 m) in diameter, and the fifth powered a four bladed screw propeller with a diameter of 24 feet (7.3 m), which enabled the colossal ship to reach a speed of 14 knots (26 km/h). She also had six masts to carry auxiliary sails. The ship was also the first to be constructed with a double-skinned hull, a safety feature which was decades ahead of industry practice.

    Brunel estimated the cost of building the ship to be £500,000. It ultimately cost double that.


    Its keel was laid down at Millwall on the Thames on 1 May 1854 and construction took just over three years to complete. Because it was so long, the ship had to be launched sideways into the narrow river. (See pictures The SS Great Eastern).

    The launch was scheduled to take place on 3 November 1857 but the enormous ship refused to budge. Two more unsuccessful launch attempts were made, first using winches and then hydraulic rams. The ship was finally launched on 31 January 1858, using more powerful hydraulic rams. Fitting out and sea trials took place during the following year and the ship made its maiden voyage in September 1859. This was unfortunately marred by a huge explosion which blew one of the funnels into the air and released steam which killed five stokers; another man drowned and several others were seriously injured. Six days later Brunel, who had been stressed by a series of difficult engineering and financial problems and was already in poor health, suffered a stroke and died at the age of 53.


    In operation the Great Eastern was beset by accidents and failures both technical and commercial. In 1861 it sustained serious damage in a storm losing one of its paddle wheels, smashing the other one and breaking the main rudder shaft to the consternation of passengers. The following year, the New York pilot inadvertently steered the ship onto rocks which opened a gash in the ship's outer hull over 9 feet (2.7 m) wide and 83 feet (25 m) long, some 60 times the area of the damage which caused the sinking of the single hulled Titanic after its collision with an iceberg. Fortunately the Great Eastern's double hull saved it from a similar fate.

    Though it may have been an engineering wonder, the Great Eastern was not a commercial success. There was insufficient traffic to fill its great bulk and, in any case, most of the docks and harbours in the world were not big enough to accommodate a ship six times bigger than anything known before so it never sailed on the long routes for which it was planned.


    In 1864 the Great Eastern was sold by auction for £25,000 to Brunel's railway locomotive engineer Daniel Gooch who converted it into a cable laying ship. One of its funnels and some of the boilers were removed and the sumptuous passenger rooms and saloons were ripped out to make way for three huge iron tanks to carry 2,600 miles (4300 km) of cable and the cable paying-out gear on the decks. In 1866 the Great Eastern was used to lay the first successful transatlantic telegraph cable replacing the damaged cable of 1858.


Stepping beyond the boundaries of familiar surroundings into uncharted territory always carries the risk of unexpected hazards and the possibility of making a wrong turn. Brunel was not immune from this and sometimes rode into a dead end. Unfortunately, because of his forceful character, he often took a lot of people with him. A couple of examples follow:


Abandoned Projects


In "Man and Superman", George Bernard Shaw wrote "The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.". Perhaps he was thinking of Brunel when he wrote it.


(See picture Brunel - Engineering Superman)


1823 Johann Wolfgang Döbereiner discovered that Hydrogen gas "spontaneously" ignited in the Oxygen of the air when it passed over finely divided metallic Platinum. He used the phenomenon, an example of what we now call catalysis although he was not aware of it, in the design of a "Platinum Firelighter".


1824 Pure Silicon was first isolated by Berzelius, who thought it to be a metal, while Davy thought it to be an insulator.


1824 While steam engines were still in their infancy, twenty eight year old French physicist and military engineer, Nicolas Léonard Sadi Carnot published "Réflexions sur la Puissance Motrice du Feu" ("Reflections on the Motive Power of Fire") in which he developed the concept of an idealised heat engine: the first theoretical treatment of heat engines. He pointed out that the efficiency of a heat engine depends on the temperature difference of the working fluid before and after the energy conversion process. This was later stated as:

η = (Th - Tc)/Th      or      η = 1 - Tc/Th

where η is the maximum efficiency which can be achieved by the energy conversion, Th is the input (hot) temperature of the working fluid in kelvins and Tc is its output (cold) temperature. This became known as Carnot's Efficiency Law and still holds good today for modern steam turbines and geothermal energy conversion. Carnot also showed that even an idealised, reversible heat engine cannot convert all of the heat supplied to it into work, providing an early insight into the Second Law of Thermodynamics.
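As a quick numerical check of Carnot's law, here is a minimal sketch (the boiler and condenser temperatures are illustrative, not Carnot's own figures):

    # Carnot's efficiency law: eta = 1 - Tc/Th, with temperatures in kelvins.
    def carnot_efficiency(t_hot_k, t_cold_k):
        return 1.0 - t_cold_k / t_hot_k

    # Illustrative figures for a steam plant: boiler at 550 K, condenser at 300 K.
    eta = carnot_efficiency(550.0, 300.0)
    print(f"Maximum possible efficiency: {eta:.1%}")   # about 45%

No engine working between these two temperatures, however ingeniously designed, can do better than this figure.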

See also Heat engines.


See more about Steam Engines.


1825 Ampère showed that the plane of a magnetic field is perpendicular to the direction of its associated electric current, and that the magnetic field in the space surrounding an electric current is proportional to the current which produces it. The following relationship applies:

∮C B · dl = μ0 Ienc

Where:

C is a closed curve bounding the surface through which the current passes.

∮C is the line integral around the closed curve C.

B is the magnetic flux density (strength) of the magnetic field.

dl is an infinitesimal vector element (tangent) of the curve C.

μ0 is the magnetic constant, or permeability, of the medium supporting the field.

Ienc is the total current passing through the surface bounded by the curve C.


Now known as Ampère's Law, it laid the foundation of electromagnetic theory. Ten years later Gauss derived an equivalent equation for electric fields.
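For the textbook case of a long straight wire, symmetry reduces the law on a circular loop of radius r around the wire to B = μ0I/2πr. A minimal sketch of that application (the current and distance are illustrative):

    import math

    MU_0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A

    def field_around_straight_wire(current_a, radius_m):
        """Ampere's law on a circle of radius r centred on a long straight wire:
        B * (2*pi*r) = mu0 * I, hence B = mu0 * I / (2*pi*r)."""
        return MU_0 * current_a / (2 * math.pi * radius_m)

    # Illustrative: 1 A measured at 1 cm gives 2e-5 tesla,
    # the same order of magnitude as the Earth's magnetic field.
    print(f"B = {field_around_straight_wire(1.0, 0.01):.1e} T")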


1825 British electrician William Sturgeon is credited with inventing the first practical electromagnet (5 years after Arago): a coil, powered by a single cell battery, wound around a horseshoe-shaped iron bar. It was the world's first controllable electric device.


1825 Aluminium was first isolated by Øersted.


1825 The Stockton and Darlington Railway, the world's first public railway, was opened with George Stephenson at the controls of his steam engine, the Locomotion, pulling 36 wagons - twelve carrying coal and flour, six for guests and fourteen full of workmen.


Stephenson was self taught and didn't learn to read and write until he was eighteen. Working as an engineman at the colliery, he was over thirty years old when, in 1813, he was permitted to tinker with the mine's steam engines. One of his early innovations was to use wrought Iron rail tracks to replace the brittle cast Iron tracks, originally designed for horse drawn wagon ways, to enable them to carry the heavier steam engines.


In 1815 he designed a miners' safety lamp which could be used in coal mines where the seeping of methane gas from the deep coal seams could result in an explosive atmosphere. A year later the well connected Humphry Davy designed a similar lamp which was named the Davy lamp in his honour overlooking the contribution of the diffident Stephenson.


For the Rainhill Trials in 1829, a competition to select the engine for the new Liverpool Manchester railway, Stephenson designed the Rocket, a steam engine which reached a speed of 29 m.p.h. (46 km/h) and won the competition outright. This was the first time that people had been conveyed in a vehicle at speeds greater than could be achieved on horseback and it caused great excitement. (See diagram of Stephenson's Rocket).

Its performance and adoption by the railway company started a frenzy of railway building - revolutionising the transport of goods, changing the patterns of industrial development, bringing travel within the possibility of the masses and with it - new aspirations. Together with Watt's steam engine, Volta's battery and Faraday's electric motor, the development of the railways was a key driver in the Industrial Revolution.


Stephenson's Rocket used many of the innovations pioneered by Richard Trevithick and established the basic configuration of the steam locomotive. As in Trevithick's Pen-y-Darren engine, it used iron wheels on iron rails, high pressure steam, double acting pistons and a "blast pipe" in the chimney. Improved features included flanged wheels rather than the flanged tracks used by Trevithick, a multi-tube boiler with 25 small diameter fire tubes running the length of the boiler to improve the heat transfer from the firebox gases into the boiler water, and a more reliable drive system. For lightness and simplicity, only the two front wheels were driven, by means of two horizontal pistons, one on either side of the boiler, with crank mechanisms directly coupling the piston connecting rods to the wheels.


The basic design principles embodied in the Rocket were soon adopted for steam trains in many countries of the world and endured until the demise of steam trains in the 1960s. The standard (or Stephenson) gauge (the distance between the rail tracks) of 4 ft 8½ in (1,435 mm), adopted by Stephenson for his railways, is used in sixty percent of the world's railways.

In later years George Stephenson was ably aided by his son Robert, who contributed to the design of the Rocket and was particularly active in organising the civil works and building bridges to carry the Stephensons' tracks, spreading the railway network throughout the world.


See more about Steam Engines.

1825-1843 The Thames Tunnel, the first successful tunnel underneath a navigable river, was designed and constructed by Marc Isambard Brunel.

In response to the demand for a much needed land link between the London docks of Rotherhithe and Wapping on opposite sides of the river Thames, Brunel teamed up with a most unlikely partner, the Scottish naval officer Thomas (Lord) Cochrane (see the footnote below), to design a tunneling shield, which they patented in 1818, to facilitate the construction of a tunnel under the river.

They took their inspiration from the feeding and digestive process of the shipworm, "teredo navalis", which, it was claimed, "had sunk more ships than all the cannon ever cast". The shipworm was a huge mollusc, nine inches (230 mm) long and half an inch (13 mm) in diameter. Its body was soft and transparent but its head was formed by jagged shells which bored into, and ground up, the wood which it ate as it bored its way into the ship's timbers, lining and protecting the pathway it left in the bore behind it with petrified excreta.

Their design for the shield envisaged a large frame, weighing 80 tons, with 3 levels, each level with 12 cells or platforms, in each of which a miner excavated the wall in front of him. The cells would be open at the back but closed at the front with removable horizontal boards to stabilise the earth in front and to limit water ingress. The boards could be removed one at a time to enable removal of a strip of earth to a depth of 4½ inches (11.5 cm) and then replaced so that the next strip could be excavated. The frame would then be moved forward 4½ inches by hydraulic rams or screw jacks and a masonry lining would be applied to the section of the tunnel walls just vacated by the frame, to seal it and give it strength, after which the process would be repeated until the tunnel was complete.


By 1823, Brunel had produced plans for the tunnel and the Thames Tunnel Company was formed in 1824 with financing secured from prominent private investors, who included George Wollaston, a local businessman and brother of William Hyde Wollaston, Vice-president of the Royal Society, and Timothy Bramah, son of Joseph Bramah, inventor of the hydraulic press. They were joined in 1828, when the project was running out of money, by others including Henry Maudslay, who had made the machines for Brunel's block making factory, and the Duke of Wellington, "The Iron Duke", hero of the Battle of Waterloo, who was by then British Prime Minister. Work commenced in 1825 using Brunel's new tunneling shield and steam driven water pumps to provide the drainage, both manufactured by Maudslay.


Brunel's son Isambard Kingdom Brunel had worked on the planning and design stages of the project with his father and in 1826 at the age of 19 was appointed Resident Engineer in charge of delivering the project.


The work was unfortunately fraught with difficulties. The tunnel was 75 feet (23 m) below the river's surface at high tide but only 14 feet (4.3 m) below the deepest part of the river bed, and ran the 1,300 feet (396 m) of its length through gravel, sand, clay and mud. Conditions in the tunnel were most unhealthy and at times highly dangerous, suffering from poor ventilation, the constant leakage of sewage laden water and, several times, from flooding when the water broke through the roof. At the time the river itself was like an open sewer, devoid of fish and wildlife. (It was not until 1858, the year of "the Great Stink", that work was started on Joseph Bazalgette's plan for the construction of London's sewage system to manage waste and clean up the river). Accidents were common, many of them fatal. Isambard, who often spent 20 hours per day working at the site, submitted himself to the same conditions as his workers and paid attention to their needs, meeting and providing for the casualties which inevitably occurred. He was himself caught in one devastating inundation in 1828 and was seriously injured, lucky to escape with his life. Others were not so lucky. All this resulted in delays and cost over-runs until, later in 1828, the company ran out of money. Despite pleas from its high profile backers, the company was not able to raise enough cash to carry on and work was suspended for seven years until the project was rescued by Government aid in 1835. This enabled the work to be re-started with a new tunneling shield weighing 140 tons and the tunnel was finally completed in 1843.

Although it was originally intended for pedestrian and horse drawn traffic it eventually became part of London's underground railway system and is still in use today.


  • Footnote:
  • Lord Cochrane was an audacious, charismatic and successful captain in the Royal Navy during the Napoleonic wars and a radical member of the British Parliament, to which (aided by bribery) he was elected in 1806. He was however dismissed from both the Navy and Parliament in 1814 after being convicted of fraudulent share trading on the London Stock Exchange. He and his accomplices were charged with perpetrating an elaborate hoax by faking a report that Napoleon had been killed in battle (a year before the Battle of Waterloo). In the days before the electric telegraph this could not be verified, and the price of government stocks rose substantially on the news, enabling Cochrane and his co-conspirators to sell, at a huge profit, shares which they had acquired just one month before. After his conviction Cochrane returned to the sea, taking charge of the Chilean Navy in late 1818 in their successful revolutionary war of independence from Spain and repeating the exploit from 1823 in Brazil's war of independence from Portugal. A similar role fighting for Greece in their 1827 campaign for independence from the Ottoman Empire had less spectacular results but nevertheless contributed to their success. His exploits became the inspiration for novelist C. S. Forester's fictional hero Horatio Hornblower.


1826 Italian physicist Leopoldo Nobili together with fellow Italian Macedonio Melloni developed a thermoelectric battery based on the Seebeck effect, constructed from a bank of thermocouples each of which provided a very low voltage of about 50 milliVolts. Nobili also invented a very sensitive astatic galvanometer which compensated for the effect of the Earth's magnetic field. The pointer was a compass needle suspended on a torsion wire in the current carrying coil. A second compass needle outside of the coil compensated for any external fields.


1826 German physicist and chemist Johann Christian Poggendorff invented the mirror galvanometer for detecting an electric current.


1826 At the age of fourteen Alfred Krupp dropped out of school and took over responsibility for running the Krupp family's steel making business at Essen in Germany after the death of his father Friedrich Krupp. When he arrived on the scene, the company was in debt and on the verge of bankruptcy and had only seven unhappy employees: five smelters and two forgers. The smelters were furnace men who controlled the steel production and its composition, which in turn determined its properties. The forgers were skilled blacksmiths who shaped the metal. By the time of his death in 1887, Alfred had built the business up to be Europe's largest industrial company with 75,000 employees, of which 20,000 were based at the Essen steelworks and the rest employed in other branch steelworks, iron ore and coal mining operations in Germany and Spain, owned by the company, as well as on railroads and a small fleet of ships bringing the raw materials to the factories. Half of this enormous business was involved in manufacturing armaments which were supplied to the armies and navies of 46 nations.


Alfred's forebears had some experience in arms and steel making but the road to 1826 had been a bumpy one. The first Krupp venture into the armaments trade was made by Anton Krupp, eldest son of Arndt Krupp, a wealthy Essen trader in wine, groceries, property, and money lending. The Krupp family had settled in Essen during the sixteenth century, just before an outbreak of the black death plague and, despite the adversity, had prospered by buying up the property of families fleeing from the plague.

In 1612, Anton married Gertrud Krösen the daughter of a local gunsmith and consequently became involved in his father-in-law's business manufacturing guns. Essen was one of the two gun making centres in Germany (the other was Suhl) and guns had been made there since 1470 and by 1608 there were 24 gunsmiths in Essen selling firearms to armies and princes. Six years later most of Europe was convulsed in the calamitous Thirty Years War (1618 to 1648) which wiped out over 20% of the German population. Essen was unfortunately located in the midst of this devastation between the warring Protestant and Catholic forces but its gunsmiths and arms merchants flourished selling weapons to the armies of both sides in the conflict. By 1620 the number of Essen gunsmiths had risen to 54 producing 14,000 gun barrels per year, of which 1,000 per year were made by Anton's factory. See how gun barrels are made.


After the war the Krupp family did not pursue gun making but for the next four generations they concentrated on trading and on offices of public administration. It was 150 years before they made their first foray into iron and steel making.

In 1751, Jodocus Krupp married Helene Amalie Ascherfeld, both direct descendants of Arndt Krupp. The unfortunate Amalie outlived both her husband and her son and inherited the Krupps' considerable wealth, becoming known as the Widow Krupp. A determined business woman, she expanded the family's holdings in textile production and coal mines and in 1799 she acquired the Gutehoffnung (Good Hope) ironworks, to which she had provided a mortgage, as a settlement when the firm went bankrupt. Located on a stream near Essen, it incorporated a foundry and blast furnace which made cast iron pots, boilers and weights.

In 1800 the reorganised Good Hope forge started operations using local ores making kitchenware, stoves, weights, farm tools and cannon balls returning the business to profit. It was Krupp's first iron making plant.

In 1807 Widow Krupp's grandson Friedrich Krupp, at the age of 19, was put in charge of the forge and the operation went downhill. He had ambition and a vision of making more technical products for the new steam age including pistons, cylinders, engine parts and steam pipes, but he had no technical knowledge of iron making and his management skills were disastrous. The business started losing money and the wily Widow sold it for a profit a year later when he was ill.


In 1810 the hapless Friedrich inherited the family fortune after the death of his grandmother, which gave him the opportunity to get back into the iron and steel business. Not only did he have the cash to indulge his passion, but advantageous market conditions made it an attractive prospect. At that time, Napoleon Bonaparte had implemented a blockade against Britain, denying its goods access to mainland Europe. These goods included crucible steel, used to make high value items such as cutlery, tools and scissors, which were highly prized in Europe for their high quality and strength. Crucible steel had mostly been imported from England and was known as "English Steel" since Benjamin Huntsman, who pioneered the process in 1740, had managed to keep it a secret. In response to the continuing demand in Europe, Napoleon offered a prize of four thousand francs to anyone who could replicate the process, a prize which reinforced Friedrich's interest.


In 1811 Friedrich used his inheritance to found the Krupp Gusstahlfabrik (Cast Steel Works) with the premature, if not misleading, claims "for the manufacture of English Cast Steel and all products made thereof" and that he possessed the secret process of English Steel. Unfortunately Friedrich was more of a dreamer than a businessman and he proceeded to squander the Krupp family's entire fortune.

Since the crucible steel casting process was unknown in Germany at the time, to get the business off the ground he offered partnerships to two self proclaimed "metallurgy experts", the von Köchel brothers, who claimed to know the secret formula. Together they built a foundry on the banks of the Ruhr River in Essen with a furnace for making blister steel by the cementation process, together with smelting furnaces and a large water powered forging hammer, but things soon started to go wrong. Some blister steel was produced by conventional means, but this was mainly intended to feed the crucible process and had limited sales prospects. It turned out that the von Köchel brothers were frauds who knew nothing about metallurgy or crucible steel manufacturing, and though they produced only unusable steel, they remained in the company until 1814, leaving it in debt. The following year Friedrich was swindled a second time by a new partner, a Prussian Hussar called Nicolai, with fake credentials, who left him with more unusable steel and even greater debts.

Even the Ruhr River flow proved unreliable leaving the plant without power for the furnace bellows and the forging hammer for prolonged periods causing missed delivery dates. This forced Friedrich to subcontract his hammer work since he was unable to afford the purchase of a steam powered hammer.

Eventually in 1816 after five years of experimenting, he was able to smelt his first steel and began to produce files, drills, tools, dies, coin presses and rolling mill blanks. By that time, a year after the Battle of Waterloo, Napoleon and the blockade were long gone and imported cast steel was available once more.

In 1818, buoyed up by his modest success, Friedrich constructed a massive new factory on Essen's Berne river, designed to accommodate sixty smelting furnaces, though he only had sufficient work for eight of them, and a huge 800 pound (360 Kg) water powered forging hammer. He did manage to achieve some sales, mostly steel dies for coin making at the Prussian mint and some orders for steel for bayonets and gun-barrels from the royal ordnance factories on the Rhine, but the Berne river flow was just as unreliable as the Ruhr. Operations were intermittent, the company was losing money and his credit was running out. In response he increased prices and attempted to reduce costs by compromising on product quality, adulterating the materials with scrap steel. The result was decreasing sales and ever increasing losses.


Friedrich was obsessed with technology and spent much of his time in the plant neglecting the wider responsibilities of the business. He had no appreciation of the importance of financial controls or of securing markets, supplies of raw materials and fuels. By the time of his death at the age of 39 in 1826 the Krupp Gusstahlfabrik had been in operation for 15 years. It had only seven employees. It was in debt and virtually bankrupt and the Krupp family fortune was gone.

It was from these inauspicious circumstances that, assisted initially by his mother Thérèse Krupp, the new Widow Krupp, the impoverished young Alfred Krupp built the company into one of the world's greatest engineering enterprises.


Widow Krupp didn't make it easy for young Alfred. She announced that his father, Friedrich, had passed on to him "the secret of manufacturing cast steel", a claim which was hard for the 14 year old to live up to. Fortunately he did not inherit the weaknesses of his father. He was a perfectionist but he was also practical, painstaking and thorough. He took his new responsibilities seriously and his devotion to the company became an obsession. He spent all his waking hours working on company business, toiling alongside the workmen during the day and, at night, writing letters to customers and carrying on his father's experiments to find or improve the "secret process". As control of the factory improved he began to devote more time to establishing a sales network, travelling widely and frequently throughout Europe, building the company through technology and market developments with disciplined management and financial controls.


Technology Developments

Progress was slow at first. The factory was no more than an artisan workshop with a limited product line, mostly flatware, consisting only of a few tools and knives and occasional coin dies for the mint.


  • Product Strategy
  • To revive the company, Alfred borrowed money from other family members to invest in new technology to expand and diversify the product line, a strategy which became typical of his management, but for many years the factory scarcely paid its way and did not break even until 1837. His first major development, which came in 1830, was the production of steel rolls, for use in rolling mills, which he later customised for manufacturing spoons, forks and coin dies for local markets. He backed up his sales effort by guaranteeing quality workmanship.

    The opportunities in the railway and armaments businesses which eventually became Krupp's main source of revenue did not arise for almost 20 years.


  • Steelmaking Process
  • Expertise in metallurgy and steelmaking were the foundations on which the Krupp enterprise was based and Alfred continued to work long and hard to develop and perfect new technologies and to build a strong patent portfolio. As late as 1838 he went on a spying trip to England, where he stayed for five months in an attempt to discover the secret of Huntsman's crucible steel. By that time, however, the principle of the process, if not the practice, was fairly well known and he didn't learn anything more than he already knew. He jealously guarded his own technology developments, however, as well as the company's financial status, and his staff were sworn to secrecy.

    Where he did make a breakthrough was in the production of very large steel castings. By the early 1850s, the only way to make high quality cast steel was by Huntsman's crucible process, but the largest practical crucibles available could only contain about 40 to 50 pounds (18 to 23 Kg) of the melted steel. In order to make a large solid ingot, the molten steel must be poured continuously into the mould so that the mould is completely filled before any part of the ingot begins to solidify, otherwise the structure of the ingot will not be homogeneous and hence will be weaker. In practice this meant that it was only possible to cast small objects with steel from a few crucibles before the steel temperature dropped too low.

    In 1851, Alfred astonished attendees at London's Great Exhibition with his display of a flawless cast steel ingot weighing 4,300 pounds (1,950 Kg) and a muzzle loading six pounder cannon made of cast steel, previously thought to be impossible. This was an achievement of logistics rather than metallurgy. Using 50 pound crucibles, it required 86 crucibles, heated in over 20 furnaces each containing four crucibles, to be brought to the required temperature simultaneously, and a gang of 50 men working in pairs with military precision to take the crucibles from the furnaces, carry them to the mould and pour in their contents within the short time allowed before any of the steel began to solidify. (The arithmetic is checked in the short sketch after this entry.)

    This exhibition caused a sensation in the industrial world bringing fame to Krupp and the Essen works and was a major turning point for the business.

    In 1862 Alfred Krupp was the first to use the Bessemer process for the mass-production of steel in continental Europe. This replaced the slow and costly crucible steel process and gave Krupp a competitive edge.

    In 1869 he also pioneered the use of the new open-hearth process of steel casting, bringing further productivity gains.
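    As flagged above, the 1851 casting logistics can be verified with simple arithmetic; a minimal sketch using the figures quoted in the entry:

        # Back-of-envelope check of Krupp's 1851 exhibition casting.
        ingot_lb = 4300              # weight of the flawless ingot
        crucible_lb = 50             # largest practical crucible charge
        crucibles_per_furnace = 4

        crucibles = -(-ingot_lb // crucible_lb)             # ceiling division -> 86
        furnaces = -(-crucibles // crucibles_per_furnace)   # -> 22, i.e. "over 20"

        print(f"{crucibles} crucibles heated in {furnaces} furnaces")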


  • Machinery
  • Alfred also invested in, and developed, new machines to improve the efficiency and scope of his operations. As sales increased, in 1835 he was able to buy a steam engine to power his forging hammer eliminating his dependence on the unreliable river flow.

    In 1841 a Munich goldsmith and engraver named Wiemer commissioned some custom engraved rolls for producing three dimensional shapes from flat plate by engraving the shape and pattern of the article to be produced in relief on the rolls. After the rolls were delivered, Alfred's brother Hermann adapted the process for the manufacture of steel spoons, cutlery and other parts for silverware, enabling Krupp to open a large silverware factory in a joint venture with a Viennese entrepreneur, Alexander Scheller, to produce goods for export.

    In 1861, as Krupp took on projects for the railways and the army requiring larger castings and forgings, Alfred developed "Fritz", a steam forging hammer with a 50-ton blow. For many years it was the most powerful in the world.


  • Railway Tyres
  • The beginning of the construction of the German railway system in 1835 brought new opportunities for the Krupp factory which produced steel axles and springs for the rolling stock, but Krupp's biggest breakthrough which propelled the company into the big league was the invention in 1851 of the weldless steel tyre which he patented the following year.

    Early railroad carriage wheels had been made from a single piece of cast iron, which is very brittle and unsuitable for carrying dynamic and shock loads, causing the wheels to break or wear out very quickly. This excessive wear problem was initially overcome by redesigning the wheels to incorporate more durable, replaceable steel tyres in the form of a hoop fitted around the rim of the wheel disc. The tyre included both the surface bearing on the track and the flange which kept it on the rails. These tyres were manufactured by heating and bending a steel bar with a suitable cross section into a circular hoop and welding the ends together, or alternatively, by a two piece construction using two shorter bars forged into semicircular arcs and welded together to form the hoop. The steel tyres were then heat shrunk onto the cast iron wheel. Though this was an improvement, the wheels were still vulnerable to wear and breakage because of the weakness of the welds. Replacing a damaged tyre put the train off the tracks for several days causing a major service interruption.

    In his search for a better solution, Alfred carried out his experiments using lead, so that he could easily melt down his failures and avoid losing the material. The seamless steel tyres he developed were cast in a single piece and forged so they did not need welding. The tyre was machined with a shoulder on its outer face to locate it on the wheel rim, and a groove on the inside diameter under the flange face. See diagram. The internal diameter of the tyre was machined to be slightly less than the diameter of the wheel on which it was to be mounted, to give an interference fit. The tyre was fitted by heating it, causing it to expand so that it could be slipped over the wheel. After the tyre cooled, a shaped steel bar rolled into a hoop was fitted into the groove to act as a retaining ring and hydraulically operated rolls swaged the groove down onto the ring. (The amount of heating such a shrink fit needs is estimated in the sketch after this entry.)

    Krupp's weldless cast steel tyres could withstand the ever increasing speeds achieved by the trains. Unlike welded steel tyres, they did not fracture under pressure and lasted four times longer than the tyres they replaced.

    Seamless tyres quickly became the source of Krupp's primary revenue stream, mainly from sales to railways in the United States and profits from this business funded the development of armaments. By the 1870s, thanks to the capacity of the Bessemer and open hearth converters to produce huge volumes of steel, Krupp was also shipping over 170,000 tons of steel rails per year to the United States until they were eventually overtaken by the rapidly growing U.S. steel industry.
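    The heating needed for a shrink fit of this kind follows from the linear thermal expansion relation ΔD = αDΔT. A minimal sketch with illustrative figures (not Krupp's actual dimensions or tolerances):

        # Shrink-fitting a tyre: how hot must it be to slip over the wheel?
        ALPHA_STEEL = 12e-6   # linear expansion coefficient of steel, per kelvin (approx.)

        def temperature_rise_for_fit(interference_mm, diameter_mm, clearance_mm=0.5):
            """Temperature rise needed to expand the tyre bore by the interference
            plus a little working clearance: dD = alpha * D * dT."""
            return (interference_mm + clearance_mm) / (ALPHA_STEEL * diameter_mm)

        # Illustrative: a 1,000 mm tyre bored 1 mm under the wheel diameter
        # needs a rise of roughly 125 K, easily reached in a simple furnace.
        print(f"Temperature rise needed: {temperature_rise_for_fit(1.0, 1000.0):.0f} K")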


  • Armaments
  • Krupp's entry into the manufacture of arms was much slower. He started in a small way between 1836 and 1842 producing hollow forged muskets and in 1843 he made his first rifle with a cast steel barrel, which was sent to the Prussian state military agents. This was followed up in 1847 with the first cannon made of cast steel, a muzzle-loading 3 pounder, but the Prussian military were not impressed by this new technology. Like the British and French armies they preferred tried and tested heavy cannons, cast from bronze, over the new lightweight guns. His next steel cannon was the 6 pounder which caused a sensation when it was demonstrated at the 1851 Exhibition in London. Despite the acclaim there were no customers and Alfred gave it to King Frederick William IV of Prussia, who used it as a decorative piece.

    Undeterred, Krupp sold his guns to other international customers, some of whom were potential enemies of Prussia. Four years later, Alfred produced a cast steel, smooth bore, muzzle loading 12 pounder cannon for the 1855 Paris Exhibition, which was 200 pounds (90 Kg) lighter than the equivalent bronze gun. He also created a stir when the 10,000 pound (4,500 Kg) cast steel ingot he was exhibiting crashed through the exhibition's wooden floor, crushing all in its way. After the exhibition, the Turkish viceroy of Egypt purchased 36 of the guns. Further contracts followed over the next few years with Belgium, Russia, Holland, Spain, Austria, Switzerland, Württemberg, Hanover and Great Britain.


    Wilhelm I, the Prussian King's brother, who became Prince Regent in 1858 after the King suffered a stroke, was more favourably disposed to new technology, recognising the strategic importance of supporting arms manufacturing in Prussia. Even so, the Prussian military did not trust the proposed breech loading or the rifled barrels which promised superior performance to the conventional muzzle loaded, smooth bore bronze cannon generally in use at the time.

    Krupp's method of loading the powder charge was an improvement over William Armstrong's 1855 design and used a metal cartridge case in which to load the charge. On firing, the cartridge case expanded against the chamber wall and effectively sealed the breech. It also left less debris in the gun barrel after firing. Krupp's metal cartridge concept is still used in modern day artillery.

    Krupp lobbied hard to overcome the conservatism of the military and in 1859 Wilhelm overrode the military objections and bought Prussia's first 312 rifled, breech loading, cast steel 6 pounder cannons from Krupp, who became the main arms manufacturer for the Prussian military.

    At the request of the Russians, Krupp adopted Armstrong's "Built up" construction to improve the burst strength of the gun barrels by heat shrinking a white hot outer tube of steel over the cold breech end of the barrel to reinforce it. His guns became known as "ringed" guns.

    The first test of Krupp's breech loading cannons under battle conditions came in the 1866 Austro-Prussian War when Krupp's guns were used by both sides in the conflict. Unfortunately a weakness in the design of the breech mechanism caused several of them to explode, injuring or killing the gunners.


    The problem had been solved by the time of the next major conflict, the Franco-Prussian War of 1870-1871 when Prussia's cast steel, breech loading Krupp cannon pulverised Napoleon III's muzzle loading bronze artillery. The significance was not lost on other governments and armies and orders started pouring in for Krupp guns and Alfred Krupp became known as the "Cannon King".

    Having established his credibility as an arms supplier, Krupp did not rest on his laurels but continued his relentless pursuit of technical excellence, building bigger and better guns and exploiting the perceived threats between nations by creating faster obsolescence, thus generating more sales and more frequent replacements.


    • Footnote
    • Essen was an Imperial City of the Holy Roman Empire which was annexed by Prussia in 1802.

      Between 1701 and 1918, Prussia was a German Kingdom or state which included parts of present day Germany, Poland, Russia, Lithuania, Denmark, Belgium and the Czech Republic.

      After the 1871 war, Prussia took over the whole of Germany. Wilhelm I, who had become the Prussian King in 1861 on the death of his brother, became the Emperor of Germany, and Otto von Bismarck, the Prussian Prime Minister, became the German Chancellor.


Business Practices

Short term profits were never the top priority of the company. It didn't have shareholders clamouring for dividends. It was a family business in which security and continuity were important, but its main motivation was to be the best in the world and to earn the status, influence and honour which went with that achievement. Krupp's company ethos also included a sense of social responsibility and a paternal concern for the wellbeing of its workers, for whom it provided generous benefits, but this was not entirely altruistic.


  • Sales and Marketing
  • From the early days, Alfred Krupp doggedly pursued international markets, personally travelling abroad, participating in international exhibitions and establishing sales outlets in Europe and overseas. He cultivated friendships in high places. In his native Prussia he stressed his patriotism and won the support of Prince Wilhelm I, though behind his back business interests took priority and he sold arms to potential enemies of Prussia. He made "common cause" with Bismarck, who is quoted as saying "The solution of the great problems of these days is not to be found in speeches and majority rulings, but in blood and iron!", winning him the patronage of the German government. He ingratiated himself with heads of state and military leaders from other nations, often giving them examples of the latest Krupp guns.

    To promote his international business he also built a huge weapons proving range to which he invited heads of government and senior military commanders to attend demonstrations of the fire power and capability of his weapons. Guests were treated to lavish hospitality.


  • Vertical Integration
  • As the business grew and the demand for raw materials increased, Alfred recognised the dangers of depending on external suppliers and the importance of securing the supply chain. He also wanted to control all aspects of the manufacturing process within his own company to achieve efficiency gains and to ensure quality. In the 1840s he therefore began to acquire ore deposits, coal mines and coke ovens as well as competing iron and steel works to expand his operations.

    In 1872, after the war, business was booming and he bought over 300 ore deposits in various parts of Germany and acquired a holding in the Orconera Iron Ore Company, which owned large concessions in superior low phosphorus iron ore deposits in Spain, often paying over the odds. He also bought the large "Hanover" coal mine, even though he already had secure supply contracts with several collieries.

    The same year he also expanded operations by buying two of his competitors' steel works, the Johanneshütte Ironworks with four blast furnaces and the Hermannshütte Ironworks with three.

    The following year he set up his own shipping company in Rotterdam and built a small fleet of four ships to transport his iron ore from Orconera in Spain.


  • Business Funding
  • The Krupp business was forever short of cash even when it was profitable. Family members steadfastly refused to dilute the Krupp family holding or to relinquish any control of the company by putting it into a joint stock company. They were also suspicious of banks. During the early days when the company made losses they made many pleas for government support which usually fell on deaf ears and they depended on family loans to survive. When the company eventually became profitable, the profits were ploughed back into developing the business. Even when sales and profits began to grow rapidly, the cash generated by the business was needed to fund the ambitious expansion plans. This caused a particularly critical problem during the economic panic of 1873 when the euphoria of the 1872 boom wore off and many companies went bankrupt. Krupp's finances were grossly over extended by their profligate purchasing spree in 1872 and they were rescued by the Prussian State Bank. Between 1876 and 1896 German tariffs protected the steel industry from British and American competition keeping Krupp and others profitable during times of recession.

    Krupp continued to look to the government for state support and, once its importance as the state's main arms supplier was established, government funding was forthcoming and the connection between the firm and the State of Prussia became increasingly close.


  • Industrial Relations and Social Policy
  • Alfred Krupp was a pioneer in providing social benefits for his employees. He had started in 1836 with a voluntary sickness and burial fund and over the years he progressively increased these benefits with company funded health insurance and a pension fund for retired and incapacitated workers.

    In 1845 the company still employed only 122 workmen but by 1865 the work force had risen to over 8,000 reaching 16,000 in 1871. With the rapid expansion of the company it was becoming difficult to house and motivate the increasing workforce so that in the 1860s benefits were further increased to include subsidised housing, hostels for unmarried employees, free health and retirement benefits, widows and orphans benefits, hospitals, schools, libraries, parks, recreation clubs and stores. In return he imposed strict discipline and demanded absolute dedication and loyalty to the company and he got it. Trade unions were not necessary and company loyalty was fanatical.


Krupp Epilogue

After Alfred's death the enormous expansion of the company continued under the supervision of Krupp family members with warships, armour plating, submarines, tanks, railway locomotives, heavy trucks and ever larger guns and ammunition added to the product portfolio. The production of armaments became even more important, boosted by the requirements of World War I and World War II. Krupp almost became an arm of the German government and was closely associated with the Nazi party.

  • Great Guns
    • Paris Gun - Known by the Germans as Wilhelmgeschütze (William's Gun after Wilhelm II - "Kaiser Bill")
    • In 1918 Krupp produced a gun weighing 256 tons intended for bombarding Paris. Fired by a massive 180 Kg (400 lbs) powder charge, its 106 Kg (234 lb) projectiles had a muzzle velocity of 1,640 m/s (5,400 ft/s), equivalent to about Mach 5, giving the shells a range of 130 kms (81 miles). During their 182 second trajectory to the target, the shells soared to an altitude of over 42 kms (26 miles), into the stratosphere, the highest point ever reached by a projectile before the rocket powered V2, which had a maximum speed of Mach 5.5. (A rough trajectory check is sketched at the end of this list.)

      The barrel was 34 m (112 ft) long with a bore of 211 mm (8.3 in), later re-bored to 238 mm (9.4 in). It was so long that it needed an overhead suspension truss to prevent it from drooping.

      See a photo of Krupp's Paris Gun.


    • Gustav Gun
    • In 1934 Adolf Hitler commissioned the world's biggest ever gun, capable of piercing one metre (3.3 ft) of steel, seven metres of concrete or thirty metres of dense earth, which Krupp was able to deliver in 1941. Known as the Gustav Gun after Gustav Krupp (see next), it could fire either gigantic 4.8 ton high explosive shells propelled by an explosive charge weighing 700 kg (1,500 lb), or even bigger 7.5 ton concrete piercing shells propelled by a 250 kg (550 lb) charge. The lighter, high explosive shells had a muzzle velocity of 820 m/s (2,700 ft/s) giving them a maximum range of 47 kms (29 miles) and the heavier, armour piercing shells had a muzzle velocity of 720 m/s (2,400 ft/s) providing a range of 38 kms (24 miles).

      This monster gun weighed 1,344 tons and was 11.6 metres (38 ft) tall, 7.1 metres (23.3 ft) wide and 47.3 metres (155 ft) long, with a barrel 32.5 metres (106.6 ft) long and a calibre of 800 mm (31 in).

      A crew of 500, commanded by a major general, was needed to assemble, load and defend the gun and to excavate and construct a double set of curved railway tracks embedded in concrete to enable the adjustment of its azimuth direction. (The barrel could be moved in elevation but could not swing in azimuth).

      See a photo of Krupp's Monster Gustav Gun.


    • See also Armstrong and Whitworth Guns from previous wars.
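    As flagged in the Paris Gun entry above, a rough drag-free trajectory check puts its performance in context; a minimal sketch, ignoring air resistance entirely (so the result is a generous upper bound):

        import math

        g = 9.81      # m/s^2
        v0 = 1640.0   # Paris Gun muzzle velocity from the entry above, m/s

        # In a vacuum, range is maximised at 45 degrees: R = v0^2 * sin(2*theta) / g
        ideal_range_km = v0 ** 2 * math.sin(math.radians(2 * 45)) / g / 1000.0
        print(f"Drag-free range: {ideal_range_km:.0f} km")   # about 274 km

    The actual range of about 130 km was less than half this ideal figure; the difference was lost to air resistance, even though lofting the shells into the thin stratosphere greatly reduced the drag over most of the flight.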

During World War I, Gustav Krupp, heir to the business at the time, was one of the German tycoons who took over and plundered the Belgian industry when the country was occupied by German troops and Krupp's weapons were used against neutral targets and non-combatants. After the war, the Krupp factories were broken up by the victors and Gustav Krupp was cited as a war criminal but not prosecuted. He was however forbidden to manufacture arms ever again. Despite this sentence, Krupp participated in the secret rearmament of Germany when Hitler came to power. Indicted once more after WWII for war crimes he escaped trial due to his advanced dementia.

During World War II, the Krupp factories were again feeding Germany's war machine. Krupp's legendary paternal treatment of the workforce however did not extend to the unfortunate masses of slave labour, including POWs, civilians from occupied countries and concentration camp inmates, who were forced to work in Krupp's factories. Eventually the factories were destroyed by allied bombing and Alfried Felix Alwyn Krupp von Bohlen und Halbach, heir to the Krupp dynasty, a member of the German SS, and "Sole Proprietor" of the business, who was in charge at the time was convicted as a war criminal. He was sentenced to 12 years in prison and the confiscation of all of his property.


  • Footnote
  • The name most associated with the growth of the American steelmaking industry is Scottish born Andrew Carnegie. His story is the essence of The American Dream. In 1848, when Andrew was 13, his parents' weaving business fell on hard times and the family emigrated to the United States. Starting work at 13, working 12 hour shifts, six days a week, in a cotton mill for $1.20 per week as a "bobbin boy" looking after spools of thread, he rose to become the richest person in the world in 1901 (according to J.P. Morgan). It was however as an investor, rather than a technologist, that he earned his fortune.


    Carnegie was diligent with a "can do" attitude and an affable personality, and his initiative and hard work, together with an element of luck, won him rapid promotions and a circle of influential friends.

    His career was meteoric. Thanks to his early schooling in Scotland, he was soon able to assist in clerical work at the cotton mill. After two years he was offered a job as messenger boy with the O'Reilly Telegraph Company where he learned Morse code during the day while studying bookkeeping in a local library at night, and at the age of 15 he became a telegraph operator. Two years later in 1853 he moved to the Pennsylvania Railroad Company, again as a telegraph operator, and his rise continued. By 1859 he had worked his way up to be Pennsylvania Railroad's Western Superintendent where he saw the importance of the steel industry to America's fast expanding railways.

    On his way up, in 1855 at the age of 20, he was offered a loan by a business friend to buy his first shares, in a document delivery company, and he quickly developed a passion for investments when he received his first dividend payment.


    By 1862 he had saved enough, together with 5 friends, to make a major investment in his first steel company, Piper & Shiffler, to build steel railroad bridges. This was followed in 1863 by an investment in a small iron foundry, the Union Iron Mills.

    In 1865, still working for Pennsylvania Railroad, his annual investment income amounted to $40,000, twenty times his already large salary of $2,000 per year. Carnegie then decided to leave and concentrate on investing, particularly in telegraph services and the steel industry, setting up with others the Edgar Thomson (ET) Steel Company with a huge plant on the outskirts of Pittsburgh. Demand for steel was insatiable, first for railroad tracks and rolling stock, and the replacement of the original wooden trestle bridges with steel structures, then for construction projects in the rapidly growing cities. Carnegie acquired several more steel making interests and eventually all of his iron and steel interests were consolidated into a single new company known as Carnegie Steel.


    Carnegie had no experience, nor any particular interest in steelmaking and he treated the steel business purely as an investor. He appointed qualified managers to take care of business operations and the technology. He was however a great promoter of the business and worked to increase profitability by means of ruthless cost cutting and to increase market share by strategic acquisitions of, and mergers with, competitor companies as well as companies supplying raw materials.


    In 1901 Carnegie Steel was bought for $480 million ($13.8 billion in today's money), of which Carnegie's share was $225 million ($6.5 billion), by Wall Street banker J. Pierpont Morgan heading a consortium involving Carnegie's competitors, American Steel & Wire and the Federal Steel Company, to form US Steel, consolidating America's steel industry and eliminating wasteful competition.


    Carnegie spent the rest of his life and most of his money funding educational projects and libraries around the English speaking world, creating opportunities for self improvement of others, just like those from which he had himself benefitted in early life.


1827 German physicist Georg Simon Ohm discovered the relationship between voltage and current, V=IR, in a conductor, which is now called Ohm's Law. The importance of this relationship lies less in the simple proportionality than in Ohm's recognition that voltage is the driver of current.
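
As a minimal worked illustration of the relationship (the values below, written in Python, are purely hypothetical):

voltage = 12.0                  # volts, the driving potential difference (assumed)
resistance = 48.0               # ohms (assumed)
current = voltage / resistance  # Ohm's Law rearranged as I = V/R
print(current)                  # -> 0.25 amperes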


1827 Scottish botanist Robert Brown, studying the suspension of pollen in water, observed the random movement of the grains we now call Brownian Motion. These random movements, which were later quantified using statistical methods, are also typical of the movement of electrons and ions in an electrolyte. The cause of this phenomenon was eventually explained in 1905 by Albert Einstein using the kinetic theory of gases.


1828 Berzelius compiled a table of relative atomic weights for all known elements and developed the system of symbols and formulas for describing chemical actions.


1828 German chemist Friedrich Wöhler discovered that the salt, ammonium cyanate, was transformed by heat into urea, a compound which occurs in urine and which had hitherto been known only as a product of animal metabolism. He wrote excitedly to his mentor Berzelius, "I must tell you that I can make urea without the use of kidneys of any animal, be it man or dog". This was the announcement of the birth of modern organic chemistry and was the beginning of the end of Berzelius' popular vitalist hypothesis, that "organic" compounds could be made only by living things.


Wöhler is also credited with the isolation of pure aluminium (in 1827, after Øersted's discovery in 1825) and was one of the first to isolate the elements yttrium, beryllium, and titanium and to observe that "silicium" (silicon) can be obtained in crystals.


1828 Self taught English mathematician George Green, who worked in his family's windmill till the age of forty, published in a local journal in Nottingham with only 51 subscribers, mostly family and friends, An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. It earned him a place at Cambridge as a mature student but its full importance was not recognised at the time until it was rediscovered by William Thomson (later Lord Kelvin) just after his graduation in 1845. Kelvin recognised this as a seminal influence in the development of electromagnetic theory.


1828 French physiologist and biologist René Joachim Henri Dutrochet discovers osmosis - the diffusion of a solvent through a semi permeable membrane from a region of low solute concentration to a region of high solute concentration. The membrane is permeable to the solvent, but not to the solute, resulting in a chemical potential difference across the membrane which drives the diffusion. The solvent thus flows from the side of the membrane where the solution is weakest to the side where it is strongest until the solutions on both sides reach the same strength, equalising the chemical potential across the membrane.


Semi permeable membranes are now widely used as separators in batteries and fuel cells allowing the passage of certain ions while blocking others.


1828 Hungarian priest and physicist of Slovak origin, Ányos Jedlik built the first direct current electric motor using an electromagnet for the rotor and a commutator to achieve unidirectional rotation. Jedlik's motor was a shunt wound machine in which a moving electromagnet rotated within a fixed coil, the reverse of modern conventional motors. The wires powering the electromagnet protruded into two small semicircular mercury cups on either side of the shaft. This provided the required commutation as the wires picked up the current from alternate cups as the shaft rotated. Like many motors at the time, it had no practical application. However, in 1855 Jedlik built another motor based on similar principles which was capable of carrying out useful work.


In 1861 he demonstrated a self excited dynamo but he did not publish his work. Subsequently Siemens, Varley and Wheatstone were credited with the invention.


Jedlik continued working on high voltage generators and spent his last years in complete seclusion at the priory in Győr.


1828 Scottish engineer, James Beaumont Neilson patented the hot blast method of air supply to blast furnaces. Preheating the air blown into the furnace enabled the efficiency of the iron ore smelting process to be improved.


See also Iron and Steel Making


1829 Nobili invents the thermopile, an electrical instrument for measuring radiant heat and infra red radiation. It was also based on the Seebeck effect as in Nobili's thermoelectric battery of three years earlier and consisted of a sensor made up from a bank of thermocouples connected in series which generated an electrical current in response to the heat radiation input. The current was measured by an astatic galvanometer, of Nobili's own design. With improvements from Melloni, it found extensive use in nineteenth century laboratories.


1829 French physicist Antoine-César Becquerel, father of a dynasty of famous scientists, developed the Constant Current Cell. The forerunner of the Daniell cell, it was the first non-polarising battery, maintaining a constant current for over an hour unaffected by polarisation. It was a two electrolyte system with copper and zinc electrodes immersed in copper nitrate and zinc nitrate electrolytes respectively, separated by a semi permeable membrane. It was left to Daniell to explain how it worked and thus to get credit for the idea.


1830 The invention of the thermostat made from a bi-metallic strip, usually brass and copper, was claimed by Andrew Ure, a Glasgow chemistry professor. As a control device it did not find much use for 70 years until the advent of electricity supplies to the home, when it could be used to operate a switch.

Note however that the bi-metallic strip used as a temperature compensating device in clocks and watches was invented by John Harrison in 1759. See Timekeepers.


1830 Joseph Henry in the USA worked to improve electromagnets and was the first to superimpose coils of wire wrapped on an iron core. It is said that he insulated the wire for one of his magnets using a silk dress belonging to his wife. An early example of insulated wire. In 1830 he observed electromagnetic (mutual) induction between two coils and his demonstration of self-induction predates Faraday, but like much of his work, he did not publish it at the time. An unfortunate tendency which he lived to regret. (See 1835 Morse)

The unit of Inductance, the Henry, is named in his honour.


1831 Faraday invented the solenoid and independently discovered the principle of Induction and demonstrated it in an induction coil or transformer. The induction coil has since been "invented" by many others (See 1886 William Stanley).

Faraday discovered that the motion of a magnet could induce the flow of electric current in a conductor in the vicinity of the moving magnet. He was the first to generate electricity from a magnetic field by pushing a magnet into a coil. He put this to practical use with his invention of the generator or dynamo, unshackling the generation of electricity from the battery. Faraday's dynamo, named the Faraday Disk after its construction, was a homopolar machine consisting of a copper disk rotating between the poles of a magnet. Current is generated along the radius of the disk where it cuts the magnetic field and is extracted via brushes contacting the shaft and the edge of the disk. See diagram. The Faraday Disk functions equally well as a motor and although the machine is said to be unique in that it is a direct current machine which does not need a commutator, it does owe something to Barlow's 1822 toothed motor design. (See also Siemens 1867).


From his experiments Faraday defined the relationship now known as Faraday's Law of Induction which describes how an electric current produces a magnetic field perpendicular to the direction of the current and, conversely, how a changing magnetic field generates an electric current in a conductor (normally a loop or a coil of wire with multiple turns, making a complete circuit) perpendicular to the field. The voltage generated at the terminals of the conductor is independent of how the change was produced. The change could be produced by moving the coil into or out of a magnetic field, rotating the coil relative to a magnet, changing the magnetic field strength or moving a magnet toward or away from the coil.

Faraday's Law states that the magnitude of the emf induced in a circuit is proportional to the rate of change of the magnetic flux that cuts across the circuit. It was left to Maxwell to express Faraday's Law and his notions of Lines of Force in mathematical terms.

The relationship can be stated as:

E= - N.dΦ/dt

Where:

E is the Electromotive Force (Voltage) induced in the coil.

N is the number of turns of wire in the coil.

dΦ/dt is the rate of change of the magnetic flux Φ passing through or enclosed by the coil.

The negative sign signifies that the polarity of the induced emf is such that it produces a current whose magnetic field opposes the change which produces it (Lenz's Law).


Or alternatively:

E= - N.Δ(A.B)/Δt

Where:

Φ = (A.B)

and

B is the field strength of the external magnetic field.

A is area of the field enclosed by the coil.


Faraday's Law is the theoretical basis of all modern electrical machines and transformers.
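
As a minimal numerical sketch of the relationship (with purely illustrative values, expressed in Python):

N = 100                 # turns of wire in the coil (assumed)
d_phi = 0.002           # change in magnetic flux through the coil, webers (assumed)
d_t = 0.01              # time over which the change occurs, seconds (assumed)
emf = -N * d_phi / d_t  # E = -N.dΦ/dt, the sign reflecting Lenz's Law
print(emf)              # -> -20.0 volts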

See more about Michael Faraday


1831 Henry demonstrated a simple telegraph system sending a current through a mile and a half of wire to trigger an electromagnet which struck a bell (thereby inventing the electric bell, for many years the main domestic use of the battery). He used a simple coding system switching the current on and off to send messages down the line. Henry thought that patents were an impediment to progress and like Faraday he believed that new ideas should be shared for the benefit of the community. He subsequently freely shared his ideas on telegraphy with S. F. B. Morse who however went on to patent them passing them off as his own.


1831-1835 Henry developed the relay which was used as an amplifier rather than as a switch as it is used today. At the end of each section, the feeble current would operate a relay which switched a local battery on to the next section of the line renewing the signal level. This enabled signals (currents) to be carried (relayed) over long distances making possible long distance telegraphy. In fact the relay reconstituted the signal rather than amplified it, just as the repeaters used in modern digital circuits do, thus avoiding amplifying the noise. The relay and its use with local battery power to "lengthen the telegraph line" were more of Henry's ideas which he failed to publicise or exploit.

Henry was appointed the first Secretary of the Smithsonian Institution when it was founded in 1846.


For over thirty years telegraphy was the main practical application of the battery, this new found electrical technology.


1832 After witnessing a demonstration of von Sömmering's electrochemical telegraph some time earlier, Baron Schilling an attaché at the Russian embassy in Munich, in turn developed the idea by making an electromagnetic device which he demonstrated in 1832. It was a six wire system which used the movement of five magnetic needles to indicate the transmission of a signal. This was the method subsequently used by Cooke and Wheatstone who later "invented" and patented the five needle electric telegraph for two way communications in 1837.


1832 Hippolyte Pixii built his "magneto generator" the first practical application of Faraday's dynamo. The term "magneto" means that the magnetic force is supplied by a permanent magnet. His first machine rotated a permanent magnet in the field of an electromagnet generating an alternating current for which there was no practical use at the time. The following year at Ampère's suggestion he added a commutator to reverse the direction of the current with each half revolution enabling unidirectional - direct current to be produced. Pixii's magneto liberated electrical experimenters from their dependence on batteries.


1833 Faraday published his quantitative Laws of Electrolysis which express the magnitudes of electrolytic effects and galvanic reactions, putting Volta's discoveries and battery theory on a firm scientific basis.

  • The amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell.
  • Faraday's Constant, named in his honour, represents the electric charge carried by one mole of electrons. It is found by multiplying Avogadro's constant by the charge carried on a single electron, and is equal to 9.648 x 10⁴ Coulombs per mole. It is used to calculate the electric charge needed to discharge a particular quantity of ions during electrolysis, as illustrated in the sketch after this list.

  • The quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights.
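
A minimal worked sketch of the two laws and the constant, assuming the electrolytic deposition of copper with illustrative values for current and time (in Python):

FARADAY = 96485.0   # Faraday's constant, coulombs per mole of electrons
current = 2.0       # amperes (assumed)
time = 3600.0       # seconds, i.e. one hour (assumed)
molar_mass = 63.5   # grams per mole of copper
z = 2               # electrons transferred per copper (Cu2+) ion
charge = current * time                       # first law: deposit is proportional to Q = I.t
mass = (charge / FARADAY) * (molar_mass / z)  # second law: and to the equivalent weight M/z
print(mass)         # -> about 2.37 grams of copper deposited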

With William Whewell, he also coined the words, electrode, electrolyte, anode (Greek - Way in), cathode (Greek - Way out) and ion (Greek - I go).


1833 Samuel Hunter Christie of the British Royal Military Academy publishes a bridge circuit for comparing or determining resistance, later to be called the Wheatstone Bridge.


1833 German physicist Wilhelm Eduard Weber, working with Gauss, demonstrated "the world's first electric telegraph" using a moving magnet and a coil of wire to send a signal along a wire suspended from a church spire in Göttingen to the other side of the town, a distance of 3 kilometres. One of many such claims before and since. The system used a simple coding scheme switching the current on and off, similar to Henry's, combined with reversing the polarity of the current to deflect a compass needle in opposite directions, to send different letters down a single wire. Over the subsequent years Weber investigated terrestrial and induced magnetic fields and verified the theoretical laws put forward by Ampère and others using electrical instruments which he designed for this purpose. The unit of Magnetic Flux is named the Weber in his honour.


1833 Russian physicist Heinrich Friedrich Emil Lenz formulated Lenz's Law which states that an induced electric current flows in a direction such that the current opposes the change that induced it. A special case of the Law of Conservation of Energy. The law explains that when a conductor is pushed into a strong magnetic field, it will be repelled, and that when a conductor is pulled out of a strong magnetic field, the magnetic forces created by the induced currents will oppose the pull. This also explains the phenomenon of back emf in electric motors, that is, the voltage created by the moving armature which opposes the applied voltage and hence the movement of the armature itself. Lenz's Law was later extended for more general application by Le Chatelier.

In the same year he also showed that the resistance of a metal increases with temperature.


1833 Scottish chemist Thomas Graham discovers that the rate at which a gas diffuses is inversely proportional to the square root of the density of the gas, now known as Graham's Law of Diffusion. Diffusion however is not confined to gases, it can take place with matter in any state. It may take place through a semi permeable membrane, which allows some, but not all, substances to pass. In solutions, when the liquid solvent passes through the membrane but the solute (dissolved solid) is retained, the diffusion process is called osmosis, a process which is used in many battery designs.
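
A minimal sketch of the law in Python, using approximate gas densities at standard conditions:

import math
density_hydrogen = 0.0899  # grams per litre (approximate)
density_oxygen = 1.429     # grams per litre (approximate)
ratio = math.sqrt(density_oxygen / density_hydrogen)  # rate(H2) / rate(O2)
print(ratio)               # -> about 4; hydrogen diffuses four times faster than oxygen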


1833 British engineer Isambard Kingdom Brunel brought bad news to his father Marc Isambard Brunel about the "Gaz Engine" on which they had been working for 10 years. After consultations with Humphry Davy in 1823, the elder Brunel concluded that closed cycle hot air engines similar to Stirling's engine could be more fuel efficient than steam engines which lost a significant quantity of water in every cycle, an opinion which was shared by many at the time including Michael Faraday and the British Admiralty. He then began working on a closed cycle engine using "carbonic acid gas" (Carbon dioxide) which was relatively easy to liquefy under pressure. The engine had two reservoirs for the condensed gas which could be alternately heated (vaporised) and cooled by hot and cold water and these two gas sources were used to propel a double acting piston. The idea was patented in 1825 and, joined by the younger Brunel, they made several demonstrators using pressures up to 120 atmospheres. (The hot air engine had originally been conceived to avoid the explosions of high pressure steam boilers). Based on intuition, as were many inventions of the day, a huge amount of money was invested in the project. Eventually the younger Brunel was able to make use of early thermodynamic theories to justify the project. Unfortunately his conclusion in 1833 was that "No sufficient advantage on the score of economy of fuel can be obtained", and the project was abandoned.


1833 Undeterred by the experience of the Brunels (see previous paragraph above), flamboyant, Swedish born, engineer John Ericsson patented in Britain his "caloric engine" a double-acting external combustion hot air engine in which expansion occurs simultaneously on one side of the displacer piston with compression on the other. It was similar to a Stirling engine (patented in 1816) in which the displacer also acts as the power piston but it used an open cycle instead of a closed cycle design.

Ericsson had left his home country for England in 1826 where he entered a design for a railway locomotive in the Rainhill Trials. Although his design "Novelty" was the fastest in the competition, he lost out to Stephenson's Rocket on reliability grounds. Ericsson, an irrepressible self publicist and showman made extravagant claims for his caloric engine which he was not always able to substantiate.


His next ventures were a stream of inventions for naval applications including the ship's screw propeller, a variant of the Archimedes Screw, which he patented in 1836 (though earlier designs by Scottish inventors James Steadman (1816) and Robert Wilson (1827) and others existed but had not been patented). The superior efficiency of the screw propeller was demonstrated by the British Admiralty in 1845 in a competition between two similar sized Navy steam sloops, the Rattler with a screw propeller and the Alecto driven by paddle wheels. On a calm day in the North Sea, coupled together stern to stern, they engaged in a "tug-of-war". The Rattler won, pulling the Alecto backwards at a speed of 2.8 knots. It was argued that this was not a fair trial since the Rattler's engines produced 300 horse power compared to only 141 horse power for those of the Alecto, but the Admiralty had already made up its mind and the spectacle gave them the convincing publicity they wanted.


Discredited by his failure to demonstrate the benefits claimed for the caloric engine, having failed to interest the British Admiralty in the propeller, and after a series of business losses and a spell in a debtors' prison, Ericsson left Britain in 1839 for the USA where he continued to work on the caloric engine for 20 years. Though he sold many examples of his caloric engine, interest faded when he was unable to show its superiority to the steam engine. He was however more successful as a naval architect and munitions designer, his most famous design being the USS Monitor, the "Ironclad" used to great effect by the Union's forces in the American Civil War (1861-1865).


1834 French clockmaker Jean Charles Athanase Peltier discovered that when a current flows through a closed loop made up from two dissimilar metals, heat is transferred from one junction between the metals to the other, so that one junction heats up while the other cools down. The effect is used as the basis for refrigeration products with no moving parts. This is now known as the Peltier effect and is the reverse of the Seebeck effect discovered 13 years earlier.


1834 French engineer and physicist, Benoît Paul Émile Clapeyron published "Puissance Motrice de la Chaleur" ("The Motive Power of Heat") in which he further developed Carnot's work on heat engines. He showed how the heat cycle relationship between the volume and pressure of the working fluid, as well as the work due to expansion and contraction, could be presented and analysed in graphical form.

He also showed that the work done on, or by, a working fluid such as steam can be determined using calculus. Thus:

W = ∫ PdV (integrated between the initial volume Vi and the final volume Vf)

where W is the work done on, or by, the steam, V is its volume and P is its pressure.
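
As a minimal numerical sketch, assuming an ideal gas expanding isothermally (so that P = nRT/V and the integral has the closed form W = nRT.ln(Vf/Vi)), with purely illustrative values in Python:

import math
n = 1.0                # moles of gas (assumed)
R = 8.314              # gas constant, joules per mole kelvin
T = 373.0              # temperature, kelvin (assumed constant)
V_i, V_f = 0.01, 0.03  # initial and final volumes, cubic metres (assumed)
work = n * R * T * math.log(V_f / V_i)  # W = ∫ P.dV for the isothermal case
print(work)            # -> about 3400 joules of work done by the gas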


1835 German mathematician Carl Friedrich Gauss showed that the total of the electric flux flowing out of a closed surface is proportional to the total electric charge enclosed within that surface. The following relationship applies:

Φ = Q/ε₀

Where:

Φ is the total flux of the electric field flowing out of the surface.

Q is the total electric charge enclosed by the surface.

ε₀ is the electric constant, the permittivity of the medium supporting the field (8.854 x 10⁻¹² Farads per metre for free space).


Now known as Gauss's Law of Electric Fields, it is the electrical field equivalent of Ampère's Law for magnetic fields. It was not published however until 1867 together with Gauss's Law of magnetic fields.
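
A minimal numerical sketch of the law, assuming a point charge in free space with an illustrative value (in Python):

Q = 1e-9         # enclosed charge, coulombs (one nanocoulomb, assumed)
e_0 = 8.854e-12  # permittivity of free space, farads per metre
flux = Q / e_0   # total flux out of any closed surface enclosing the charge
print(flux)      # -> about 113 volt metres, whatever the shape of the surface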

Meanwhile Faraday, working independently, introduced the concept of capacitance with his definition of the dielectric constant ε, being equivalent to Gauss' permittivity.

See also the relevance to Maxwell's Equations.


Gauss also did pioneering work on probability and statistics, defining and characterising the Normal Distribution, now also named the Gaussian Distribution in his honour. It is the theoretical basis of much of today's quality control of which Six Sigma is an example.


Gauss was one of the world's most gifted and prodigious mathematicians making major contributions to geometry, algebra, statistics, probability theory, differential equations, electromagnetics, and astronomy. Working alone for much of his life, Gauss's personal life was, like Ampère's, tragic and complicated. His first wife died early, followed by the death of one of his sons, plunging him into a depression which was not helped by an unhappy second marriage which also ended with the early death of his second wife.


While he was working, when informed that his wife was dying, Gauss replied: "Ask her to wait a moment - I am almost done."


1835 Samuel Finley Breese Morse, American artist and professor of the Literature of the Arts of Design in the University of the City of New York, and religious bigot with a mandate directly from God, made a career change at the late age of 41 and started work on telegraphy. Undaunted by his lack of knowledge of the principles of electricity, he sought assistance in developing his ideas, first from a colleague, Leonard Gale of the University of New York, who pointed out to Morse the need for insulation on the windings of his electromagnets, and then from Joseph Henry who already had a working telegraph system and who explained the need for relays to extend the range of the system. Morse subsequently patented Henry's ideas in his own name. He demonstrated the "first" electric telegraph in 1835 ignoring many prior claims dating as far back as Gray in 1729, Morrison's design of 1753 and Salvá's in 1804 as well as more practical recent inventions by Henry in 1831 and Weber in 1833.

Morse patented his system in 1837 and although it came after the needle telegraphs of Schilling (1832) and that of Cooke and Wheatstone (1837) which was patented earlier the same year as Morse's, Morse's system was simpler and more robust using only a single signalling wire plus a return wire and its use spread very quickly.


Morse subsequently claimed sole authorship of these ideas and also of the relay, another of Henry's inventions, ignoring Henry's essential contributions to the system, thus creating an irreparable rift with Henry. Similarly, the coding system Morse Code on which single channel telegraphy depends was based on existing technology including Henry's ideas, as well as those of Gauss and Weber, which Morse developed jointly with Alfred Vail, Morse's business partner. It was Vail who invented the Morse key and also the printing telegraph which was patented in Morse's name. Their relative contributions are still in dispute. (See also 1841 Bain)

Henry is reported to have said in later life "If I could live my life again, I might have taken out more patents".


The Communications Revolution

Before the advent of the electric telegraph, communications had been limited by the speed of the fastest horse or the fastest ship. It took anything from four to six months to send a message from Britain to Australia and the same time to send a reply back. The telegraph reduced this to minutes, but it didn't just increase the speed of communications, it also dramatically increased the value of the information transmitted. Think of railway signalling which enabled safer movement of trains or military communications which gave commanders intelligence about the enemy's position and enabled rapid deployment of their own assets. Similarly, government or business administrators could monitor the status of remote operations giving them timely opportunity to intervene or to revise their own plans. Think also of commercial networks which could provide time sensitive commercial information to market traders or speculators giving them a competitive advantage.

The electric telegraph also facilitated both the gathering and dissemination of information and brought better understanding of unfamiliar people, places and communities, the first step towards the so called "Global Village".

Providing timely access to information, and the ability to communicate with remote locations transformed news reporting, knowledge of world events, trade, travel, warfare, diplomacy, administration and long range personal and business relationships much more dramatically than today's Internet Revolution.


See also the Transatlantic Cable.


For 35 years the battery was a solution looking for a problem. It had been used on a small scale as a laboratory tool providing the energy for electrolysis in the analysis of chemical compounds and the isolation of new elements, but it was Morse's electric telegraph which eventually drove the deployment of batteries on an industrial scale.


1835 Electric arc welding proposed by James Bowman Lindsay of Dundee. The idea was eventually patented fifty years later by Benardos and Olszewski in 1885.

Lindsay had many bright ideas, including the design for an electric light which he demonstrated in 1836 and several innovations in the field of telegraphy but none of these were ever commercialised.


1836 Demonstration by a British chemist John Frederic Daniell of the Daniell cell, a two electrolyte system using two electrodes immersed in two fluid electrolytes separated by a porous pot.

Volta's simple voltaic cell cannot operate very long because bubbles of hydrogen gas collect at the copper electrode acting as an insulator, reducing or stopping further electron flow. This blockage is called polarisation. Daniell's cell overcomes this problem by using electrolytes which are compatible with the electrodes. Thus the Zinc electrode is suspended in an electrolytic solution of Zinc sulphate which is contained in the porous pot (Initial designs used sulphuric acid rather than Zinc sulphate). The porous pot is in turn immersed in the copper sulphate solution which is contained in a glass jar into which the copper electrode is also suspended. The Daniell cell does not produce gaseous products as a result of galvanic action and copper rather than hydrogen is deposited on the cathode. Daniell's non-polarising battery was thus able to deliver sustained, constant currents, a major improvement on the Voltaic pile.

The Daniell cell chemistry was also available in other configurations which provide superior performance such as the gravity cell or crowfoot cell which eliminated the porous pot.

Daniell's cell was however based on a similar non polarising battery design demonstrated by Becquerel in 1829 which used nitrate electrolytes rather than the sulphate electrolytes used by Daniell. Despite the prior art, Daniell, rather than Becquerel, is remembered as the inventor of the non-polarising cell.


Early galvanic cells were all based on acidic electrolytes and many of these designs produced hydrogen at the cathode causing the cell to become polarised. Two approaches were adopted to solve the polarisation problem. Daniell's solution was a non-polarising cell which did not produce hydrogen. The other alternatives were depolarising cells containing oxidising compounds which absorbed the hydrogen as it was produced and did not allow the build up of bubbles. The Leclanché cell which uses manganese dioxide as a depolariser is an example of this type.


1836 Although it had been known for many years that some chemical processes could be speeded up by the presence of an unrelated chemical agent which was not itself consumed by the reaction, a phenomenon which had been exploited by Döbereiner and others, it was Berzelius who in 1836 introduced the term catalyst and elaborated on the importance of catalysis in chemical reactions.


1836 Electric light from batteries was shown at the Paris Opera.


1836 Parisian craftsman Ignace Dubus-Bonnel was granted a patent for the spinning and weaving of glass. His application was supported by a small square of woven fibreglass. The drawn glass was kept malleable by operating in a hot vapour bath and weaving was carried out in a room heated to over 30°C.


1836 Irish priest, scientist, and inventor, Nicholas Joseph Callan, working at Maynooth Theological University in Ireland, invented the induction coil. He discovered that by interrupting a low current through a small number of turns of thick copper wire making up the primary winding of an induction coil, a very high voltage could be induced across the terminals of a high turns secondary winding of thinner copper wire on the same iron core. Such induction coils are used in the automotive industry to operate the sparking plugs; in other industries they are generally known as Ruhmkorff coils.

The importance of Callan's pioneering work was not recognised at his remote institution which had other priorities and he never received recognition for this invention which is now associated with the name of German-born Parisian instrument maker, Heinrich Ruhmkorff. Like all instrument makers, Ruhmkorff put his name on every instrument he made and Callan's coil eventually became known as the "Ruhmkorff Coil".

Callan also developed a galvanic cell known as the Maynooth Battery in 1854.


1837 Faraday discovers the concept of dielectric constant, invents the variable capacitor and states the law for calculating the capacitance. The capacitance of a parallel plate capacitor is given by:

C = ε.A/d

Where:

C is the capacitance.

A is the area of the two plates.

ε is the permittivity (sometimes called the dielectric constant) of the material between the plates.

d is the separation between the plates.
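
A minimal worked sketch of the formula with illustrative dimensions, assuming a glass dielectric with a relative permittivity of about 4 (in Python):

e_0 = 8.854e-12     # permittivity of free space, farads per metre
e_r = 4.0           # relative permittivity of the dielectric (assumed)
area = 0.01         # plate area, square metres (assumed)
separation = 0.001  # gap between the plates, metres (assumed)
C = e_r * e_0 * area / separation  # C = ε.A/d
print(C)            # -> about 3.5e-10 farads, i.e. 354 picofarads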


The unit of Capacitance, the Farad, is named in Faraday's honour.

See more about Faraday.


1837 Sixteen years after the principle was demonstrated by Faraday, self taught American blacksmith Thomas Davenport patented the first practical electric motor as "an application of magnetism and electro-magnetism to propelling machinery." Powered by a galvanic battery consisting of a bucket of weak acid containing concentric cylindrical electrodes of dissimilar metals, the motor was a shunt wound, brush commutator device. The magnetic field of the stator was provided by two electromagnets. Two further electromagnets formed the spokes of a wheel which acted as the rotor. The commutator reversed the polarity of the rotor electromagnets as they passed the alternate north and south poles of the stator to create unidirectional rotation. It was granted the first ever patent for an electrical machine.


Davenport's "revolutionary" invention was ahead of its time and it did not bring him the commercial success his efforts deserved. At the time, the lack of suitable batteries or any other source of electrical power to drive the motor inhibited its adoption and his persevering endeavours to improve and promote the motor led him into bankruptcy. His pioneering use of electromagnets in both the stator and the rotor of his machine went largely unnoticed until the idea was reinvented simultaneously by Varley, Siemens and Wheatstone in 1866 for use in their designs for dynamos. It was not until forty years after Davenport's invention that the demand for electric motors eventually took off. Unfortunately Davenport didn't live to see it. He died aged 49 in 1851.


1837 Patent granted for a Needle electric telegraph (Two way electric communications) conceived by William Fothergill Cooke, a retired English officer of the Madras army studying anatomy at the University of Heidelberg, and refined by physicist Sir Charles Wheatstone of King's College, London. (See 1816 Ronalds) This was claimed to be the first practical battery powered telegraph, however it is very similar to Schilling's design of 1832. An elegant design, instead of using one wire for each letter it used only five signalling wires plus a return wire. When activated, combinations of the five needles pointed to individual letters on a board, so that twenty different letters could be identified by only five wires. There was no provision for sending the letters C, J, Q, U, X and Z. The design was overtaken by the simpler single wire system devised by Morse using his coding system of dots and dashes. The relationship between Cooke and Wheatstone eventually ended acrimoniously over a dispute about their respective contributions to the design.


In 1839, Cooke and Wheatstone's telegraph was installed on Brunel's Great Western Railway where, on 1 January 1845, it was successfully used to enable the apprehension of murderer John Tawell fleeing from the scene of his crime on a train travelling from Slough to Paddington. After he boarded the train a telegraph message was sent from Slough, alerting police in London who were able to arrest him on arrival at his destination. It was an event which stirred the public interest in telegraphy which up to that time had been regarded as no more than a scientific curiosity.


Wheatstone claimed many inventions in his lifetime, usually some time after they had been invented by somebody else. Apart from the needle telegraph see the electric clock, punched tape and the dynamo. At least he acknowledged that the Wheatstone Bridge was invented by somebody else.


1837 First commercially available insulated wire made by British haberdasher W. Ettrick who adapted silk wound "millinery" wire, used in hat making, for electrical purposes. The same year William Thomas Henley made a six head wire wrapping machine for manufacturing silk insulated wire and founded Henley Cables.


1837 James W. McGauley of Dublin invented the self acting circuit breaker in which the electric current moved an armature which opened the circuit switching off the current. When the current was removed the armature moved back to its original position and switched on the current once more causing the armature to oscillate and the current to be switched rapidly on and off. The same year American inventor Charles Grafton Page built a similar device which he called a rocking magnetic interrupter. The original purpose of these devices was to provide current pulses to the primary of an induction coil causing repetitive high voltage sparks at the terminals of the secondary winding. This trembler mechanism was subsequently widely used in electric bells, buzzers and vibrators.


1838 Scottish engineer Robert Davidson built a DC electric motor based on iron rotor elements driven by pulses from electromagnets in the stator. It was the first example of what we would now call a switched reluctance motor. The motor comprised two electromagnets, one on either side of a wooden rotor, and three axial iron bars equally spaced around the periphery of the rotor. The electromagnets were switched on and off in turn by means of a mechanical commutator driven from the rotor.

Davidson used four of these motors to drive a 5 ton electric locomotive on the newly opened Edinburgh/Glasgow railway in 1842 reaching a speed of 4 mph over a distance of one and a half miles.

The vehicle was powered by two large batteries constructed from wooden troughs each with 20 cells containing sulphuric acid in which were suspended zinc and iron electrodes. The motor speed was controlled by lowering or raising the electrodes into and out of the acid. A resin sealant protected the wooden cells from attack by the acid.

Like Davenport's motor, Davidson's motor was also ahead of its time and was not developed into a practical product. The more efficient electromagnetic rotors and stators as pioneered by Davenport, became the norm and the reluctance motor was forgotten. It was however revived in the 1960s when new semiconductor technology made electronic commutation possible and, because of its simplicity, the reluctance motor finds many uses today.


1838 Carl August von Steinheil a German physicist discovers the possibility of using the "earth return" or "ground return" in place of the current return wire for the signal in telegraph circuits thus enabling communications using a single wire.


1839 Steinheil builds the first electric clock.


1839 Welsh lawyer Sir William Robert Grove demonstrates the first Fuel Cell. Attempting to reverse the process of electrolysis by combining hydrogen and oxygen to produce water, he immersed two Platinum strips surrounded by closed tubes containing Hydrogen and Oxygen in an acidic electrolyte. His original fuel cell used dilute sulphuric acid because the reaction depends upon the pH when using an aqueous electrolyte. This first fuel cell became the prototype for the Phosphoric Acid Fuel Cell (PAFC) which has had a longer development period than the other fuel cell technologies.

The same year Grove also demonstrated an improved two electrolyte non-polarising galvanic cell using zinc and sulphuric acid for the anodic reaction and platinum in nitric acid for the cathode. Known as the Grove cell, it provided nearly double the voltage of the first Daniell cell. Grove actually developed a rechargeable cell, however there were few facilities for recharging at that time and the honour for inventing the secondary cell eventually went to Planté in 1860. Grove's nitric acid cell was the favourite battery of the early American telegraph systems (1840-1860), because it offered high current output. However it was found that the Grove cell discharged poisonous nitrogen dioxide gas and large telegraph offices were filled with gas from rows of hissing Grove batteries. Consequently, by the time of the American Civil War (1861-1865), Grove's battery was replaced by the Daniell battery.

In later life (1880) Grove became a high court judge.


1839 The Magnetohydrodynamic (MHD) Generator proposed by Michael Faraday.


1839 Prussian engineer Moritz Hermann von Jacobi, financed by Czar Nicholas, made the first electric powered boat using 128 Grove cells. He also formulated the law known as the Maximum Power Theorem or Jacobi's Law which states: "Maximum power is transferred when the internal resistance of the source equals the resistance of the load". Also known as load matching.
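
A minimal sketch of the theorem in Python, sweeping an assumed load resistance and confirming that the delivered power peaks where the load equals the source's internal resistance:

V = 12.0        # source emf, volts (assumed)
R_source = 2.0  # internal resistance of the source, ohms (assumed)
loads = [r / 10.0 for r in range(1, 101)]  # candidate loads, 0.1 to 10.0 ohms
def power(R_load):
    return (V / (R_source + R_load)) ** 2 * R_load  # P = I².R_load
print(max(loads, key=power))               # -> 2.0 ohms, matching R_source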


In 1838 von Jacobi also discovered electroforming by which duplicates could be made by electroplating metal onto a mould of an object, then removing the mould. This galvanic process was used for making duplicate plates for relief or letterpress printing when it was called electrotyping.


1839 Alexandre-Edmund Becquerel discovered the photovoltaic effect when he was only nineteen while experimenting with an electrolytic cell made up of two metal electrodes placed in an electrically conducting solution. He noticed that small currents were generated between the metals on exposure to light and that these currents increased with the light intensity. This new source of electricity never had the same impact as Volta's cells since the currents were small, and the phenomenon was largely ignored by the scientific community. 100 years later Becquerel's discovery was recognised as the first known example of a P-N junction. See also Becquerel 1896


1839 Polystyrene isolated from natural resin by German apothecary Eduard Simon who was, however, not aware of the significance of his discovery, which he called Styrol. Its significance as a plastic polymer with a long chain of styrene molecules was recognised by Staudinger in 1922.


1840 James Prescott Joule an English brewer published "On the Production of Heat by Voltaic Electricity" showing that the heat produced by an electric current is proportional to I²R, now known as Joule's Law. He also discovered that the electrical power generated is proportional to the product of the current and the battery voltage and he established that the various forms of energy - mechanical, electrical and heat - are basically the same and can be changed, one into another. Thus he formed the basis of the law of Conservation of Energy, now called the First Law of Thermodynamics. See also Joule's work on refrigeration.
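
A minimal worked sketch of Joule's Law with purely illustrative values (in Python):

current = 3.0      # amperes (assumed)
resistance = 10.0  # ohms (assumed)
power = current ** 2 * resistance  # P = I².R, watts (joules per second)
heat = power * 60.0                # joules dissipated in one minute
print(power, heat)                 # -> 90.0 watts, 5400.0 joules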


1840 Robert Stirling Newall from Dundee patented a wire rope making machine suitable for manufacturing undersea telegraph cables. It was used to make the first successful telegraph cable connecting England and France in 1851 and later, with others, the first transatlantic telegraph cable. The cable was insulated with gutta-percha, the adhesive resin of the isonandra gutta tree, introduced to Europe in 1842 by Dr. William Montgomerie, a fellow Scot working as a surveyor in the service of the East India Company. Gutta percha was used for 100 years for cable insulation until it was eventually replaced by polyethylene (commonly called polythene) and PVC.


1840 Electroplating, a process discovered by Cruikshank forty years earlier, was re-invented by the Elkingtons of Birmingham and commercialised by Thomas Prime. Articles to be plated were suspended as one electrode in a bath containing an electrolyte of silver or gold dissolved in cyanide. When the voltage was applied to the electrodes the metal was deposited on the suspended article.


1840 Eminent British mathematician and Astronomer Royal, George Biddell Airy, develops a feedback device for continuously manoeuvring a telescope to compensate for the Earth's rotation. Problems with his mechanism led to Airy becoming the first person to discuss instability (hunting or runaway) in closed-loop control systems and the first to analyse them using differential equations. Stability criteria were later established by Maxwell.


Feedback control systems were not new. The list below gives some examples from earlier times:

  • 270 B.C. Greek inventor and barber Ktesibios of Alexandria invented a float regulator to keep the water level in a tank feeding a water clock (the clepsydra - Greek water thief) at a constant depth by controlling the water flow into the tank.
  • 250 A.D. Chinese engineer Ma Chun invented the cybernetic machine, also called the south pointing carriage, models of which can be found in several museums throughout the world. Based on connecting the wheels through a system of differential gears to a pointer, usually in the form of a statuette with an outstretched arm, the pointer always points south no matter how far the carriage has travelled or how many turns it has made. Legend has it that a Chinese general used south pointing chariots to guide his troops against the enemy through a thick fog.
  • 1620 Dutch engineer living in England Cornelius Drebbel invented the thermostat for his stove. It depended on the expansion and contraction of a liquid to move a damper which controlled the air flow to the fire.
  • 1745 Scottish blacksmith and millwright Edmund Lee added a fantail to the moveable cap of the windmill, perpendicular to the main sails, to keep the main sails always pointing into the wind.
  • 1759 English clockmaker John Harrison used a bi-metallic strip to compensate for temperature changes affecting the balance springs in his clocks. As the temperature rises the bi-metallic strip reduces the effective length of the balance spring to compensate for its expansion and change in elasticity.
  • 1787 English carpenter Thomas Mead regulated the speed of rotation of a windmill using the displacement of a centrifugal pendulum to control the effective area of the sails.
  • 1788 James Watt designed the centrifugal flyball governor to control the speed of his steam engines by adjusting the steam inlet valve.

Considering his track record, Airy surprisingly held the post of Astronomer Royal, the highest office in the British civil service, for forty six years. Filled with his own self importance he belittled the work of those whom he considered his social inferiors such as Faraday, whose mathematics, in his view, wasn't up to scratch, and John Couch Adams who predicted the existence and orbit of the planet Neptune and whom Airy ordered to proceed slowly and re-do his calculations "in a leisurely and dignified manner". Consequently Airy missed its eventual discovery which was scooped by Frenchman Urbain Jean Joseph Le Verrier.

In his role as chief scientific advisor to the government he put a premature end to Babbage's pioneering work on computers with his verdict, "I believe the machine to be useless, and the sooner it is abandoned, the better it will be for all parties", which cut off all government funding for the project.

Airy also advised against the construction of the Crystal Palace to house the Great Exhibition of 1851 because he said the structure would collapse when the salute guns were fired. Despite Airy's objections, it was built anyway and was a great success.

After the Tay Bridge disaster in 1879, when the bridge collapsed into the river during a storm killing all 75 passengers on the train passing over it at the time, the subsequent investigation found that Airy, who provided the wind loading for designer Thomas Bouch, had seriously miscalculated the effect of a Tayside gale on the structure, and that the bridge would have fallen "even if construction had been perfect".


1840 "Steam Electricity", electrostatic discharges produced by the frictional electrification of water droplets, observed by a colliery "Engine Man" near Newcastle in England when probing a steam leak. The phenomenon was investigated by local lawyer, (later to be engineer and arms manufacturer), William Armstrong who constructed what he called a Hydro-Electric Generator using the effect to produce electrostatic charges on demand. It consisted of a boiler insulated from the ground generating a jet of steam from which sparks could be drawn on to an insulated metallic conductor. The conductor became positively charged, while the boiler acquired a negative charge.

See also Kelvin's Thunderstorm for an explanation.


1841 The non-polarising Carbon-Zinc cell, substituting the cheaper carbon for the expensive platinum used in Grove's cell, invented by German chemist Robert Wilhelm Bunsen. His battery found large scale use for powering arc lights and in electroplating.


Bunsen did not invent the eponymous burner for which he is famous. The basic burner was in fact invented by Faraday and improved by Peter Desaga, a technician working for Bunsen at the University of Heidelberg. The improved burner was designed to provide the high temperature flames needed for Bunsen's joint studies of spectroscopy with Kirchhoff, and Desaga was smart enough to manufacture and sell the new device under his boss's name.


Bunsen never married. He was a popular teacher who delighted in working with foul smelling chemicals. Early in his career he lost the use of his right eye when an arsenic compound, cacodyl cyanide, with which he was working, exploded.


1841 Scottish clockmaker Alexander Bain invented the first pendulum electric clock. Bain demonstrated his clock to Charles Wheatstone who copied the clock and three months later demonstrated it to the Royal Society claiming it as his own invention. Fortunately, unknown to Wheatstone, Bain had already patented the invention.

Bain also proposed a method of generating electricity to power his clock by means of an earth battery. This consisted of two square plates of Zinc and Copper, about two feet square, buried deep in the ground a short distance apart forming a battery with the earth acting as the electrolyte. Such an arrangement produces about one volt continuously.


1842 Austrian physicist Christian Andreas Doppler explained that the apparent frequency of waves as experienced by an observer depends on the relative motion between the observer and the source, the wavelength being shorter for an approaching source and longer for a receding source. He used the analogy of a ship sailing into or retreating from the waves to explain his hypothesis, but sceptics were not convinced and so in 1845 he set up an experiment to demonstrate the effect. He arranged for a trumpeter to ride on an open train carriage and, as a reference, for two trumpeters to be stationed in a railway station. All three trumpeters were to hold the same note as the train passed through the station. His experiment verified that the pitch of the moving trumpet heard by a fixed observer at the station was higher than the pitch of the stationary trumpets as the train approached the station and lower than the stationary trumpets as the train was leaving the station. Known as the Doppler effect, it was shown by Fizeau in 1848 that the effect also applied to light (electromagnetic) waves.


The principle of the Doppler effect is used extensively today in Radar applications and highway speed traps to determine the speed of moving objects by measuring the frequency shift of signals bounced off the speeding vehicles.
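
A minimal sketch of the effect for a moving source, using the classical formula f' = f.v/(v ∓ vs) with illustrative values for the trumpet experiment (in Python):

v_sound = 343.0  # speed of sound in air, metres per second
f = 440.0        # pitch of the trumpet note, hertz (assumed)
v_train = 20.0   # speed of the train, metres per second (assumed)
approaching = f * v_sound / (v_sound - v_train)  # pitch raised on approach
receding = f * v_sound / (v_sound + v_train)     # pitch lowered on retreat
print(approaching, receding)  # -> about 467 Hz and 416 Hz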


1843 Alexander Bain patented a device to scan a two-dimensional surface and send it over wires. Thus, the patent for the fax machine and the first use of scanning to dissect and build up an image was granted 33 years before the patent was given for the telephone. Over a period of five years Bain designed and patented many improvements to the electric telegraph including the use of punched tape (re-invented by Wheatstone and sold to Samuel Morse in 1857) which were widely adopted at the time. Unfortunately he derived no financial benefit from his ideas. His efforts and his money were spent in pursuing patent infringements by Samuel Morse and he retired into a life of obscurity, poverty and hardship.


1843 The first computer program was written by Augusta Ada Byron, Countess of Lovelace, to calculate the Bernoulli numbers. Known as Ada Lovelace, she was the beautiful daughter of romantic English poet Lord Byron, whose many scandalous relationships shocked England, and the wife of the Earl of Lovelace. At the age of 14 she was tutored by famous mathematician Augustus De Morgan at the University of London and became the world's first software engineer. Convinced of her own genius she let everybody know it at every opportunity. She worked as an assistant to Charles Babbage on the development of his "analytical engine", the world's first programmable computer, which used punched cards for input and gears to perform the function of the beads of an abacus.

Before Babbage, computing devices were mostly analogue, performing calculation by means of measurement; Babbage's machine however was digital, performing calculation by means of counting. It is claimed that Ada originated the concept of using binary numbers, a practice used in all modern computers, however Babbage's difference engine and more versatile analytical engine were both based on the decimal numbering system. Her notes indicate that she understood and used the concepts of a stored program, as well as looping, indexing, subroutine libraries and conditional jumps, the first use of logic in a machine, however the extent of Babbage's contribution to these thoughts and how much was her own work is not clear. She wrote "The Analytical Engine ... weaves algebraic patterns, just as the Jacquard-loom weaves flowers and leaves."


Though her contribution to computer technology may be questioned, her charm did wonders for Babbage's PR (although it didn't quite work on Michael Faraday. See More).

Ada however managed to run up considerable gambling debts with her lover John Crosse and as a solution she applied her mathematical prowess to fresh fields, developing a winning "system" for betting on horses (proving, incidentally, that genius and common sense don't always go hand-in-hand). Unfortunately, the horses being unaware of their responsibilities, the system didn't win and Ada finished her life as a bankrupt, alienated from her family, addicted to laudanum (opium dissolved in strong alcohol), dying a painful death from cancer of the cervix at the age of 36, repeating the demise of her father, also an opium addict, who died of a fever at the same age of 36.


Babbage did not have the financial resources to complete his machines and he appealed to the Prime Minister Robert Peel for help, but after taking advice from the formidable Astronomer Royal Sir George Airy, the request was turned down and his machines were never finished. In 1991 the British Science Museum completed the construction of Babbage's Difference Engine No.2 from Babbage's original drawings with new components and it worked just as he said it would, performing its first test calculation for the public, the powers of seven (y = x⁷), for the first 100 values.


1843 Sir Charles Wheatstone "found" a description of Christie's 1833 bridge circuit, now known as the Wheatstone Bridge, and published it via the Royal Society, though he never claimed to have invented it.

The same year Wheatstone also invented the Rheostat (from the Greek "rheos", a flowing stream), a variable resistor.


1843 Patents for the vulcanisation of natural rubber with Sulphur to improve its strength, wearing properties and high temperature performance were awarded to Thomas Hancock in England in May 1843 and one month later to Charles Goodyear in the USA. Subsequently patents for hard rubber, called vulcanite or ebonite, created by using excess sulphur during vulcanisation, were granted to Hancock in England in 1843 and to Nelson Goodyear (brother of Charles) in the USA in 1851.

Ebonite is a hard, dark and shiny material initially used for jewellery, musical instruments, decorative objects and dental plates (with pink colouring) for nearly 100 years. It is also a good insulator and soon found use in electrical equipment and power distribution panels.

Ebonite was a milestone because it was the first thermosetting material and because it involved the modification of a natural material.

Ebonite mouldings were exhibited by both Hancock and Goodyear at the Great Exhibition of 1851.


1843 German founder of modern electrophysiology Emil du Bois-Reymond discovered that nerve impulses were a kind of "electrical impulse wave" which propagated at a fixed and relatively slow speed along the nerve fibre. In 1849, using a galvanometer wired to the skin through saline-soaked blotting paper to minimise the contact resistance, he was able to detect the minute electrical discharges created by the contraction of the muscles in his arms. Realising that the skin acted as an insulator in the signal path, he increased the strength of the signals by inducing a blister on each arm, removing the skin and placing the paper electrodes within the wounds. He determined that a stimulus applied to the electropositive surface of the nerve membrane causes a decrease in electrical potential at that point and that this "point of reduced potential", or impulse, travels along the nerve like a wave.


Galvani's theory of animal electricity vindicated at last? See also nerve impulses.


1845 Michael Faraday discovers that the plane of polarisation of a light beam is rotated by a magnetic field. The first experimental evidence that light and magnetism are related. Now called the Magneto-Optic effect or the Faraday effect.


1845 Gustav Robert Kirchhoff, a German physicist, announced at the age of 21 the laws which allow the calculation of the currents, voltages and resistances of electrical networks. In further studies, based on Kelvin's mathematical representation of the circuit elements, he demonstrated in 1857 that electrical signals travel along a conductor at close to the speed of light.
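
A minimal sketch of how Kirchhoff's laws are applied in practice (the circuit and component values below are invented for illustration): writing the voltage law around each mesh of a two-mesh resistor network gives simultaneous linear equations in the mesh currents, which can be solved directly.

# Kirchhoff's voltage law applied to a two-mesh circuit: a 9 V battery
# drives mesh 1 through R1; R3 is shared between the two meshes.
import numpy as np

V, R1, R2, R3 = 9.0, 100.0, 330.0, 220.0  # volts and ohms (illustrative)

# KVL around each mesh, in mesh currents I1 and I2:
#   V = I1*R1 + (I1 - I2)*R3
#   0 = I2*R2 + (I2 - I1)*R3
A = np.array([[R1 + R3, -R3],
              [-R3, R2 + R3]])
b = np.array([V, 0.0])

I1, I2 = np.linalg.solve(A, b)
print(f"I1 = {I1*1000:.2f} mA, I2 = {I2*1000:.2f} mA")       # 38.79, 15.52
print(f"Voltage across shared R3 = {(I1 - I2)*R3:.2f} V")    # 5.12 V
# Check: I1*R1 + 5.12 V = 9 V, satisfying the voltage law around mesh 1.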


Between 1855 and 1863 Kirchhoff formed a productive working partnership with Robert Bunsen at the University of Heidelberg where they undertook the first systematic investigation of atomic spectra. They discovered that the flame of each element had a unique emission and absorption spectrum in visible light and founded the science of emission spectroscopy for analysing and identifying chemical substances. They invented the spectroscope which allowed them to analyse not only laboratory samples, but also the Fraunhofer lines in cosmic light spectra, and by comparing these with the dark lines in the spectra of earthly elements they could determine the composition of the Sun and the stars by spectral analysis of the radiation they emit.

These achievements came forty years before the discovery of the electron. A more comprehensive theory taking into account the structure and quantum nature of the atom was eventually developed by Niels Bohr in 1913.


After an accident in early life, Kirchhoff spent most of his working life in a wheelchair or on crutches.


1845 Two thousand years after Archimedes explained the mechanical advantage of the compound pulley system, English lawyer William George Armstrong invented the first major enhancement of the original design, a hydraulic jigger for improving the efficiency of dock-side cranes which he demonstrated at Newcastle's "Lit and Phil". It was the converse of Archimedes' block and tackle and used high pressure water from the municipal water supply to operate a hydraulic ram which Bramah had shown to be capable of exerting very high forces. Pulley blocks were attached to the ram's piston and to the case of the ram at the opposite end and a cable or chain was looped around the pulley sheaves and connected to the load. The pressure of the water forced the piston out of the ram thus forcing the pulleys apart, the opposite of a conventional block and tackle which pulls them closer together. Depending on the number of sheaves, the jigger's pulley system magnified the stroke of the ram, increasing the displacement of the lifted load, but reduced the force pulling the load, whereas the basic pulley system magnified the lifting force but reduced the displacement of the lifted load. The load was lowered simply by releasing the water from the ram.

Armstrong's system eliminated the need for costly manual labour to operate the old block and tackle system and provided a smooth lift and greatly increased the speed at which the load could be lifted. It was immediately successful and led to a string of new hydraulic applications including hoists, capstans, turntables, dock gates, rock crushing and even passenger lifts.
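
A rough sketch of the force and stroke trade-off just described, with invented figures (the mains pressure, ram size and number of sheaves are assumptions, not from the text): the jigger multiplies the ram's travel at the cost of a proportional reduction in lifting force, the exact converse of a conventional block and tackle.

# Hydraulic jigger arithmetic: stroke multiplied, force divided.
from math import pi

pressure = 5.5e5        # Pa, ~80 psi town water main (assumed)
ram_diameter = 0.25     # m (assumed)
ram_stroke = 1.5        # m (assumed)
sheaves = 6             # falls of chain around the pulley blocks (assumed)

ram_force = pressure * pi * (ram_diameter / 2) ** 2  # force on the piston

lift_force = ram_force / sheaves    # load force divided by the falls
lift_travel = ram_stroke * sheaves  # load travel multiplied by them

print(f"Ram force:   {ram_force/1000:.1f} kN over {ram_stroke} m")   # ~27 kN
print(f"Load lifted: {lift_force/1000:.1f} kN through {lift_travel} m")
# Work in equals work out (ignoring friction): force falls as travel grows.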


Armstrong's interest in hydraulics had been inspired by his role as a lawyer involved in the legal aspects of the provision of municipal water supplies and also by his first view of a waterwheel in action, which he encountered while on a fishing trip. As an amateur he had made models of hydraulic systems while still working as a lawyer, but in 1847, at the age of 37, he made a major career change, abandoning his Newcastle law practice to start an engineering works at Elswick-on-Tyne manufacturing hydraulic cranes, where he could work full time on engineering projects.

This was the modest start of Britain's greatest Victorian enterprise.


Armstrong was a great innovator. His next invention, in 1850, was the hydraulic accumulator, designed to overcome the problem of low, or variable, water pressure for his hydraulic machinery. It provided a controllable high pressure hydraulic source and consisted of a large water-filled cylindrical reservoir with a piston onto which a heavy weight of several tons of concrete or metal could be loaded to increase and maintain the pressure of the water. In 1865 he installed two blast furnaces to manufacture his own castings.
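
The accumulator principle is simple arithmetic: the dead weight on the piston sets the system pressure, p = W / A. A hedged sketch with assumed figures (the text says only "several tons"):

# Accumulator pressure from a dead weight on a piston, p = W / A.
from math import pi

g = 9.81                                 # m/s^2
weight_tonnes = 100                      # assumed dead load
piston_diameter = 0.45                   # m, assumed

area = pi * (piston_diameter / 2) ** 2   # piston area, m^2
pressure = weight_tonnes * 1000 * g / area

print(f"Pressure = {pressure/1e5:.0f} bar ({pressure/6895:.0f} psi)")
# ~62 bar (~900 psi): the weight maintains this pressure regardless of
# flow, smoothing out demand peaks from cranes and hoists.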


His next venture was to use his considerable engineering skills to revolutionise the design and manufacture of armaments for the British army.

During the Crimean War (1853-1856), he was prompted by reports from the Battle of Inkerman (1854) describing the difficulties caused by the poor manoeuvrability of the British field guns. It took 150 soldiers and 8 officers three hours to manhandle two smooth bore 18 pounder field guns, each weighing 2.1 tons (2134 kg), across one and a half miles (2.4 km) of rough, muddy and ridged terrain to get them from their siege park to a strategic, elevated defensive position on Home Ridge from which the 100 attacking Russian guns 1300 yards (1200 m) away on Shell Hill would be in range. Meanwhile, until the guns were in place, the British troops, outnumbered by more than 3 to 1, were extremely vulnerable to enemy fire, suffering appalling casualties and loss of life.


Note: Pounders - The size of a gun was specified as the weight in pounds (0.454 kg) of the projectile it fired. After 1864, the capacity of the larger guns was specified as the diameter, or calibre, of the bore.


The Guns

While the presence of the two 18 pounders at Inkerman was successful in turning the tide of the battle, Armstrong felt that it should not be necessary to have a gun weighing over two tons to fire an eighteen pound projectile. He believed he could design something much lighter with even better performance by applying the experience he had gained in manufacturing precision hydraulic rams to the development of large field guns. He also recognised that the heavy artillery design favoured by the military had not changed much in over 200 years, with muzzle loading bronze or cast iron barrels prone to blowing up. Cast iron was fine for making hydraulic rams but it was not suitable for containing the explosive loads found in gun barrels. Attempted breech loading designs had also been too weak and dangerous, failing to withstand the explosion of the charge. On the other hand, small arms producers had taken advantage of new materials, skills and technologies developed during the Industrial Revolution to introduce wrought iron, breech loading rifles firing conical shells with percussion caps, replacing smooth bored muzzle loading muskets firing round shot.


Artillery development had just not kept pace with small arms development.

What was needed was a scaled up version of the rifle.


Spurred on to come up with a solution by his friend James Rendel, chief civil engineer of the British Admiralty, who provided practical insight into the issues involved, Armstrong called upon the advice of James Nasmyth and Isambard Kingdom Brunel, who had both shown an interest in weapons development, to help him in this task. The result was a series of breech loading field guns with rifled steel barrels, lighter, more accurate and with greater range than the army's muzzle loading cast iron and bronze cannons, together with improved projectiles to use in them. It was a major step in artillery development.


In 1855 the War Office (Now called the Ministry of Defence - How times change.), seeking ideas for improved artillery, received almost 1000 proposals from which Armstrong was selected to produce six prototypes.


Design challenges and solutions included:

  • Construction
  • Conventional field guns or cannon used heavy barrels with thick walls of bronze or cast iron to contain the explosive firing charge and to direct the projectile on its way, but cast iron is brittle, with a crystalline structure and poor tensile strength, so the castings had to be very large. Large castings are also susceptible to flaws and cracks. Bronze is softer but that means it wears much more quickly than cast iron. Armstrong's barrels were built up from layers of more flexible and durable wrought iron or steel, each with properties or characteristics optimised for its task. Early designs used an inner tube, or core, forged from solid bars of wrought iron heated to a high temperature, wrapped round a mandrel and forged together to form the lining of the barrel. Subsequently, in 1863, mild steel, toughened in oil, was used to manufacture the barrel's core because it had better wear characteristics. The tensile strength needed to contain the explosive charge was obtained by shrinking and welding cylindrical wrought iron rings over the inner tube. The diameter of the rings when cold was slightly less than the diameter of the inner tube, but when heated they expanded and could be slipped over the inner tube. On cooling, the rings shrank and gripped the inner tube, leaving the interior of the barrel under compression (a rough sketch of the shrink-fit arithmetic is given after this list). Thicker outer tubes, or more layers, were used near the breech where the pressure from the detonation of the charge was greatest. This "built up" or laminated structure provided a "pre-stressed" barrel. Inward pressure from the outer tube, or tubes, compressed the inner tube and, during firing, counteracted the outward radial forces exerted on the barrel by the explosive charge. The result was that the barrel, the heaviest part of the gun, could be much smaller and lighter than in previous guns. This construction was later adopted in 1866 by Alfred Krupp in his "ringed gun".

    Added benefits were that the stronger barrel allowed the cannon to withstand more powerful explosions from larger charges of gunpowder so that greater speed and energy could be imparted to the projectile or larger projectiles could be used. The size and weight reduction also enabled much larger guns to be produced.

    The "Build up" construction method was one of the keys to the success of the gun. It's composite structure allowed the gun to be designed to exploit the properties of different materials to create a structure whose strength was greater than the strength of the individual parts.


  • Projectile
  • The second major factor contributing to the gun's success was the design of the projectile. It was well known that using an elongated shell with a conical tip rather than round shot would increase the range since the wind resistance encountered by a projectile increases with its cross-sectional area. For the same weight an elongated shell will have a lower cross-section and hence lower wind resistance. To provide directional stability and prevent the shell tumbling end over end or deviating from its course the gun must impart a spin to the shell as it leaves the gun and this is done by rifling the barrel.

    Rifling also placed requirements on the projectiles. They must be a tight fit in the barrel and engage with the rifled grooves. For this reason shells with a soft metal casing such as lead are required. Armstrong's shells were hollow, containing an explosive charge which was not unusual for the period, but a soft metal hollow shell would be prone to collapsing due to the explosive forces during firing.

    The shells were therefore made from cast iron with a thin, deformable lead coating so that their diameter was slightly greater than the calibre of the gun. When the gun was fired the lead engaged with, and was crushed into, the barrel's rifling grooves, imparting the necessary spin to the shell. This tight fit had the added advantage of minimising the windage losses (see below) in the gun barrel, thus increasing the range.


  • Propellant Charge
  • The gunpowder propellant charge used to accelerate the shell was contained in a cloth bag which was loaded directly behind the projectile.


  • Windage
  • Windage is the narrow gap between a gun's bore and the projectile's diameter which was necessary in smooth bore, muzzle loading guns to allow for crude manufacturing tolerances of the cast iron projectiles and to allow the projectile to be rammed down the length of the barrel on loading. Windage also referred to the amount of hot propellant gas that escaped around the loosely fitting projectile on firing. This effect reduced the volume and pressure of the gas accelerating the projectile, seriously reducing the gun's range. Traditional cannon firing spherical cannon balls were particularly wasteful.

    On the positive side, the flash of the escaping propellant gas passing around the shell provided a self-igniting fuse when used with explosive shells, avoiding the need to light the fuse before loading the shell.

    Armstrong's tight fitting rifled shells however did not suffer from windage. All of the propellant gas generated by the explosive charge was applied to the projectile increasing the range of the gun or allowing smaller firing charges to be used. It also meant that, without the hot flash, another method of initiating the shell's fuse had to be found. (See below).


  • Rifling
  • The method of rifling was Armstrong's third major innovation. A projectile's range, accuracy and stability are improved by spinning it around its axis, so that the gyroscopic forces due to the spin stabilise its orientation and keep it on track during its flight to the target. The spin is imparted by machining helical grooves, called rifling, along the length of the gun barrel. This puts conflicting demands on the material used for the barrel. It must be very hard to resist the wear caused by friction with the projectiles. This would suggest the use of cast iron, but because cast iron is very hard it is difficult to machine; its tensile strength is also too low, unless the casting is very thick, to absorb the pressures of the explosive charge, and it is brittle and prone to cracking. While bronze castings are much easier to machine, they are too soft and the rifling would soon be damaged. It was wrought iron which made rifling possible - being harder than bronze and having higher tensile strength than cast iron, it made rifled barrels practical.

    Rifling also affected the design of the projectiles which had to be compatible with the rifling in the barrel.


    See more about Whitworth and alternative rifling.


  • Breech Loading
  • Breech loading was necessary because the alternative of loading a rifled gun through the muzzle was very difficult, but it also had other advantages, the main one being a faster rate of fire. Loading the gun from the rear leaves the crew less exposed to enemy fire and also allows smaller gun emplacements or turrets.

    These advantages were well known at the time but existing designs were unreliable, unsafe and unpopular. The bore of Armstrong's gun was closed by a metal block or "vent piece" which was dropped into a slot and kept in place by a large screw. It was an improvement on current practice but still not perfect and in a few cases vent pieces had been ejected at high speed from the breech.


  • Muzzle Loading
  • Because of the extremely high explosive forces encountered in high calibre guns and the greater consequences of a failure, Armstrong did not consider the safety margin of the breech loading mechanism to be sufficient for guns larger than his 110 pounders. He therefore reverted to muzzle loading for higher calibre guns.


  • Fuses
  • With the elimination of windage, Armstrong had to find a new safe method of self igniting the fuses in his explosive shells. He designed a variable delay fuse, initiated on the shell's exit from the barrel and timed to explode before the shell hit the target to cause fragmentation damage. The shell contained a suspended hammer which was released by the shock of firing to ignite the primer, initiating the timing sequence. Shells designed to explode on impact to increase the blast damage to, or the penetration of, the target caused by the shell used a percussion fuse in the nose of the shell to initiate the explosion.


  • Materials
  • Armstrong's gun, like all guns, was subject to extreme tensile, compression, shock, vibration and abrasive forces as well as temperature extremes, and the selection of optimum materials was important for its success. Manufacturing processes included steel making, casting, welding, forging and precision machining. The behaviour of the explosive charges used also had to be controlled.

    In the 1850s, process control was rudimentary and the quality of the materials used was often inconsistent. Metallurgy was in its infancy and there was very little, if any, published data about the strength of materials.

    Armstrong spent months testing different materials to understand the factors influencing their performance to enable him to optimise their use and to ensure they were fit for purpose. He even tested a variety of chemical additives to the explosive charges to ensure a safe, optimum burn rate of the charge.
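
Returning to the "built up" construction described at the top of this list, a minimal sketch, with assumed dimensions, of the shrink-fit arithmetic: the diameter of a heated ring grows by roughly α·d·ΔT, so the temperature rise needed to slip a slightly undersized ring over the inner tube follows directly.

# Shrink-fit arithmetic for a built-up barrel ring (all figures assumed).
ALPHA_IRON = 12e-6      # per degree C, thermal expansion of wrought iron (approx.)

inner_tube_od = 200.0   # mm, outer diameter of the inner tube (assumed)
interference = 0.25     # mm, how much smaller the cold ring is (assumed)
clearance = 0.25        # mm, extra gap needed to slip the hot ring on (assumed)

# Diameter grows by alpha * d * dT, so solve for the temperature rise:
delta_t = (interference + clearance) / (ALPHA_IRON * inner_tube_od)
print(f"Heat the ring by about {delta_t:.0f} degrees C")  # ~208 degrees C

# On cooling the ring tries to recover the interference, gripping the
# inner tube and leaving it in compression, which the firing charge must
# first overcome before the barrel wall sees any net tension.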


Armstrong Guns - Performance


In 1855 the first trial gun delivered to the War Office for testing was a 3 pounder firing cylindrical lead shot and weighing 560 pounds. It was disparaged by the War Office's Ordnance Committee as being too small for use on the battlefield, though they conceded that it had improved accuracy, range and power. Undeterred, Armstrong bored out the barrel to carry a 5 pound cast iron shot coated with lead, following up the next year with an 18 pounder.


In 1859, after four more years of discussions with sceptical military men and unfriendly rivalry from Joseph Whitworth, a competing arms manufacturer, new tests of larger guns, under service conditions, were arranged. Armstrong's 18 pounders demonstrated three times the range and 57 times better accuracy at the same distance than the Army's existing 18 pounders. The reloading time was substantially reduced and their higher speed projectiles carried more destructive power. Furthermore, with a weight of only 0.6 tons (610 kg), they were over 70% lighter than the cumbersome 18 pounders used at Inkerman.

The Army officers present were astonished and Armstrong's gun was rapidly approved by the War Office, going into service the same year.


Armstrong suddenly became a national hero. He was made Engineer of Rifled Ordnance to the War Department and given a large order for guns.

The War Office recognised the importance of Armstrong's gun technology but were concerned that the technology could be acquired by foreign armies. Armstrong in turn was worried that the War Office would eventually transfer the production of his gun to its own munitions factory at Woolwich Arsenal. Between them, they negotiated a long term contract which protected Armstrong's Elswick gun making business and in return Armstrong gave his 11 patents for ordnance and projectiles to the government. In recognition of this gesture he was awarded a knighthood. As Armstrong had feared, production at Woolwich was ramped up using his patents and government contracts for guns from Elswick were severely cut back. Fortunately he was able to more than make up for the loss by selling overseas.


He went on to produce breech loading guns in various sizes ranging from 6 pounders to 110 pounders weighing 4 tons but for larger sizes (150, 300 and 600 pounders) he reverted to muzzle loading, considering breech loading to be too risky and dangerous.

In 1887 he produced a "Monster" gun weighing 111 tons (112,000 kg) with a calibre of 16.25 inches (413 mm) and a total length of 43ft 8in (13.3 m). Designed for use on warships it had an effective range of 8 miles (12.9 km). Its 1800 pound (816 kg) shells emerged from the muzzle at a speed of 2020 feet per second (2,217 km/h), and could penetrate wrought iron to a depth of 30.6 inches (777 mm) at a distance of 1000 yards (914 m).


The Ships

In 1867 Armstrong's company expanded into fitting out warships, a logical progression since the navy already used Armstrong's hydraulics for handling their big guns. He negotiated a venture with the local shipbuilding firm of Mitchell & Swan who would make warships at their Walker yard 6 miles down river, while Armstrong would provide the guns.

Unfortunately there was a low bridge across the river between the two factories, blocking the passage of large ships. He solved the problem by designing a Swing Bridge, operated by his hydraulic rams, rotating on a pivot at the centre of the river to let the ships through. The bridge was opened in 1873 and is still in operation today.

In 1894 Armstrong also designed the hydraulic mechanism that operated London's Tower Bridge.


In 1882 Mitchell & Swan merged with Armstrong's company to form Armstrong, Mitchell & Co. and a new shipyard specialising in warship production was built at Elswick next to the armaments works, together with a new steelworks with two Siemens open hearth furnaces. When it was completed the Elswick works covered 50 acres extending for over a mile along the north bank of the River Tyne and employed 11,000 rising to 13,000 during peak loads. It was the only shipyard which could build a battleship including all its armaments. Armstrong also opened a manufacturing plant in Italy. Between 1881 and 1897, 42 warships were produced at the Elswick works.


In 1897 Armstrong, Mitchell purchased the engineering firm of their old rival Joseph Whitworth, who had died 10 years earlier. By then Whitworth's employed 2000 men, compared with Armstrong Mitchell's 20,000, and had added toughened steel armour plate and gun mounting mechanisms to their product line, which neatly complemented Armstrong's output. The combined Armstrong Whitworth became one of the world's greatest manufacturing companies.

Armstrong's weapons and ships were bought by armies and navies all over the world, from Russia, China and Japan to Argentina, Chile and the United States, where he supplied both sides in the American Civil War (1861-1865), bringing him immense wealth.

Though Armstrong died in 1900 his company still prospered and supplied vital armaments during World War I including 13,000 big guns, 100 tanks, 47 warships, 140 converted merchant ships, 1,000 aeroplanes, 3 airships, 14,500,000 shells, 18,500,000 fuses and 21,000,000 cartridges. What would Europe look like today if Britain had not had Armstrong's technology to challenge Germany's mighty Krupp?


Cragside, Armstrong's home in Northumberland, was a showcase for his ingenuity. In 1878 it was the world's first private dwelling to be fitted with electric lights (apart from the homes of the various inventors of rival electric lights). Initially it was lit by carbon arc lamps powered by a hydroelectric generating system of his own design, also a world first. Electric power was supplied by a 4.5 kW, 90 Volt Siemens dynamo, belt driven by a 6 hp Vortex inward flow reaction turbine, locally manufactured to a design by James Thomson, elder brother of Lord Kelvin. The turbine was fed with water from an artificial lake created for the purpose in the grounds of Armstrong's estate. The power plant was located 1320 m (almost a mile) from the house and current was transmitted through bare copper wire with a round trip of 2.6 km. In 1880, the carbon arc lamps were replaced by 45 incandescent lamps, recently invented by his friend Joseph Swan. Then four years later, in the first of many upgrades, the generating capacity was increased to power 92 lights using a Crompton dynamo delivering 90 Amps at 110 Volts, driven by a 24 hp (17.9 kW) Vortex turbine.
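
A hedged back-of-the-envelope check of that transmission run, using the figures given above (4.5 kW at 90 V over a 2.6 km round trip); the conductor size is an assumption, not recorded here:

# Resistive losses in the Cragside transmission line (wire size assumed).
from math import pi

RHO_CU = 1.72e-8          # ohm-metres, resistivity of copper

power, volts = 4500.0, 90.0
round_trip = 2600.0       # m, from the text
wire_diameter = 0.020     # m; 20 mm bare copper is an assumption

area = pi * (wire_diameter / 2) ** 2
resistance = RHO_CU * round_trip / area

current = power / volts                    # ~50 A at full output
drop = current * resistance                # volts lost in the line
loss = current ** 2 * resistance           # watts dissipated

print(f"Line resistance: {resistance:.3f} ohm")            # ~0.142 ohm
print(f"Current: {current:.0f} A, drop: {drop:.1f} V, loss: {loss:.0f} W")
# Even a very thick conductor loses ~7 V of 90 V and ~350 W over 2.6 km,
# one reason early low-voltage DC schemes kept generator and load close.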

Other domestic gadgets included central heating by means of warm air ducted to the rooms, an electric bell system to summon staff to their stations, a hydraulic lift to provide access to the upper rooms and a water powered roasting spit in the kitchen.


Armstrong was a hard task master but also a generous philanthropist funding many public works in his native Newcastle.

When he died aged ninety in 1900 he was worth £1,400,000 (£160 million in today's money).


1846 The Smithsonian Institution was established in the USA, "for the increase and diffusion of knowledge among men", with a large endowment from English chemist and mineralogist James Smithson, in neat symmetry with the founding of the Royal Institution in England by the American, Count Rumford. The distinguished Joseph Henry was chosen as the Smithsonian's first Secretary. Smithson never visited the United States but after he died his remains were brought there for burial.


1846 From his experiments on magneto optics Faraday discovered that some substances such as heavy glass and Bismuth are repelled rather than attracted by magnets and named the phenomenon diamagnetism. Using the analogy with dielectrics and conductors he made the distinction between diamagnetics - "poor conductors of magnetic force" and paramagnetics - "good conductors of magnetic force".


1846 The birthplace of the modern oil industry was Baku in Azerbaijan, then part of the Russian Empire, where the first "modern" oil well was drilled in 1846 by local mining engineer V. Semyonov. It was followed by others in Bobrka in Poland (1854), Bucharest in Romania (1857), Lambton County in Ontario, Canada (1858) and Titusville in the USA (1859). Except for the 1858 Canadian well, which was originally dug by hand, all of these so called "modern" wells used the same percussion drilling techniques, also called cable tool drilling, that the Han Chinese had pioneered in their oil fields 2000 years before.


In 1898, the Russian oil industry exceeded the U.S. oil production level and by 1901, Baku produced more than half of the world's oil.


Though it was not the first, the Titusville oil well drilled by Edwin Laurentine Drake in 1859 is usually considered to be the West's first commercially viable source of oil.

Drake's is a sad story. An ex-railroad conductor with no engineering or drilling experience, he had retired from the railroad at the age of 38 due to ill health. Around the same time, the Pennsylvania Rock Oil Company had been formed to exploit oil deposits which were seeping from the land in various locations, particularly around Titusville in Pennsylvania, but financial difficulties caused the break up of the company, which re-emerged with a low capital base as The Seneca Oil Company.

In 1858 Drake invested in Seneca Oil and he was hired by them with a salary of $1,000 per year. Giving him the nickname of "Colonel" to impress the local residents, Seneca Oil sent him to Titusville to investigate the oil deposits there. He set about building a drilling rig based on traditional percussion drilling methods but using a steam engine for repetitively raising the heavy drill bit. He devised improvements for drilling through the bedrock, housing the bit in an iron pipe to prevent the borehole from collapsing but the work took longer than expected. When Seneca Oil, having invested $2,000 in what appeared to be a dry hole, refused to provide any more capital to purchase essential equipment, Drake used his own money to fund the work. After many difficulties and scorn from the locals he struck oil in August the following year at a depth of 69½ feet (21 metres). Almost immediately Drake's methods, which he failed to patent, were copied by others in the vicinity and America's oil boom was launched.


Unfortunately Seneca Oil did not pay Drake's salary for more than two years, eventually paying him off in June 1860 with a payment of $2,167. By 1862, much more productive wells had come on stream, causing the price of oil to drop, and Seneca Oil, with its original low capacity wells, went out of business. The man who had made countless people very rich died in poverty, an invalid confined to a wheelchair, at the age of 61.


1847 Ignoring the difficulties encountered with previous experimental Atmospheric Railways, including the Croydon railway built by William Cubitt in 1846, as well as warnings from experienced engineers such as Daniel Gooch and Robert Stephenson, in 1847 Isambard Kingdom Brunel launched his own atmospheric railway connecting Exeter with Newton Abbot in Devon, a distance of 20 miles (32 km).


This system did not use heavy locomotives on the track to pull the carriages. Instead the carriages were pulled along by a piston moving in a pipe laid between the tracks. A large stationary engine ahead of the train pumped air out of the pipe and the pressure differential between the partial vacuum in front of the piston and the atmospheric pressure behind it caused the piston to move along the pipe. The piston was connected to the floor of the carriage by means of a plate which slid in a slot at the top of the pipe and the vacuum was maintained by airtight leather flaps, riveted to the pipe, which opened as the plate passed through and closed again after it had passed. Brunel's railway used 15 inch (381 mm) pipes on the level sections and 22 inch (559 mm) pipes for the steeper gradients. Pumping stations were situated every three miles along the line and trains could run at 20 miles per hour (32 km/h).


The advantages of this system were that there were no heavy locomotives on the track, the stationary engines were more efficient, more reliable and easier to maintain, there were fewer problems with traction on the gradients, and the passengers would not be subject to the noise and smell of the steam engine.

Disadvantages were mainly associated with the seal around the piston and, more importantly, maintaining the vacuum seal in the slot, which was the system's Achilles heel. Apart from wear and tear, the leather flaps were attacked by vermin and damaged by frost in the winter. Various lubricants were tried to keep the leather supple, including cod oil, soap, beeswax and tallow, but the problems with the seals remained. Less serious problems were the inconvenience of decoupling the carriages from the piston at the end of each section and reconnecting them to the piston in the next section. Furthermore the trains could not be run in reverse. Running costs however were another major problem. It was calculated that Brunel's atmospheric traction cost 3s 1d per mile (£0.10/km), compared to 1s 4d (£0.04/km) for conventional steam power.


In view of these insurmountable difficulties the project was abandoned in 1848 after only one year and the line returned to conventional locomotive haulage. The shareholders in the system had lost £500,000.


1848 William Thomson (later elevated to the peerage as Lord Kelvin), a Scottish physicist born in Belfast, established the basis for an absolute temperature scale. Starting from the experimental results of Charles and Gay-Lussac, Kelvin also showed that there is an absolute zero of temperature at -273°C. The absolute temperature scale is named the Kelvin scale in his honour and -273°C is called 0 K, or absolute zero.


Kelvin was a child prodigy in mathematics. Entering Glasgow University at the age of ten, he started the undergraduate syllabus when he was only fourteen and published his first scholarly papers, correcting errors in the works of both Fourier and Fourier's critics, when he was only sixteen. Fourier remained an inspiration to him throughout his early years. Kelvin always sought practical analogies to explain his theories and published over 600 scientific papers on mathematics, thermodynamics, electromagnetics, telecommunications, hydrodynamics, oceanography and instrumentation, and he filed 70 patents. He is remembered for his work on the Transatlantic Telegraph Cable but he initially gained fame by estimating the age of the Earth, from a knowledge of its cooling rate, at over 100 million years (later revised and broadened to between 20 and 400 million years) in contradiction of the prevailing religious, creationist view of the World. Despite this he maintained a strong and simple Christian faith throughout his life and engaged in a long running public disagreement with Charles Darwin, remaining "on the side of the angels", claiming that, according to his calculations, the age of the Earth was too short for Darwin's evolutionary changes to have taken place. (Current estimates give the age of the Earth as 4.6 billion years, taking into account the heating effect of the radioactivity of the Earth's core, something of which Kelvin could not have been aware.) He remained actively involved in scientific work until he was 75, but in later life he found it difficult to accept Maxwell's theories, for which he himself had been the genesis, or the concept of radioactivity.


According to Kelvin's biographer Charles Watson, "During the first half of Thomson's career he seemed incapable of being wrong while during the second half of his career he seemed incapable of being right."


1849 John Snow, a London-based obstetrician and anaesthetist, published a paper, "On the Mode of Communication of Cholera", in which he proposed that cholera was not caused by breathing "bad air" (noxious vapours) or a miasma in the atmosphere which was the conventional view, but was in fact a water-borne infection carried by germs, and that clean water was essential for preventing disease.

Cholera was a major global scourge in the 19th century with frequent large-scale epidemics in European cities, primarily originating in the Indian subcontinent, with 100,000 deaths on the island of Java alone. Known as the Blue Death, since its victims turned blue, cholera could kill within four hours and had no known cause or cure. The symptoms were the extreme pain and dehydration caused by the loss of three to five gallons (10 to 20 litres) of bodily fluids through diarrhoea and vomiting, appearing between two hours and five days after infection.

At that time, most people believed that cholera was caused by airborne miasmas, noxious vapours containing particles of decaying matter or human waste that were characterised by their foul smell. Traditional prevention methods by wearing masks filled with fragrant herbs or flowers or clearing the air by burning scented woods and tar or washing and painting walls and floors were all ineffective. Similarly no currently available cures or treatments for the disease such as bleeding and rehydrating with water or taking medicines such as laxatives, opium, peppermint, brandy and strong herbs, had any effect.


Brought to England by sailors, cholera first appeared at Sunderland docks in October 1831, where the first two cases were traced to boatmen Robert Henry and William Sproat, who both died within three days of falling ill. In Britain, 32,000 people died of cholera in 1831 and 1832. Subsequently cholera appeared in other ports and a third epidemic occurred between 1846 and 1860; by 1854 a further 23,000 had died in the UK. People blamed hospitals for spreading the disease, but it was also found in deep coal mines, where squalid, severely cramped working conditions, with neither water supply nor drains, made it difficult to keep the working environment clean. In these cases it was thought that the disease was possibly spread by person to person contact. Quarantine was also introduced in an attempt to control the spread of the disease.


Snow was a meticulous researcher who established the science of epidemiology. In 1832, at the age of 19, when he was a surgeon-apothecary apprentice at Newcastle upon Tyne, he had encountered a cholera epidemic for the first time in Killingworth, a nearby coal mining village where he gained experience treating many victims of the disease.

In 1848 when a new outbreak of cholera struck London he set about investigating the transmission of the disease in more depth. He learned that the first victim, John Harnold, a merchant seaman, had arrived from Hamburg in September and rented a room in London where he had quickly developed symptoms of cholera and died within a few days. Snow decided to track the progress of the disease to see if he could determine exactly how it was spread.


He observed that cholera was a disease of the bowel and not a respiratory disease of the lungs, making it unlikely that the cause was harmful fumes or bad air. Instead he thought it was probably due to the quality of the local water supply. Like most cities, London's water supplies and sewer systems were unsanitary and relatively primitive in those days. People didn't have running water or modern toilets in their homes. Water was mostly drawn from local wells and waste water, as well as human waste, was typically thrown into the street, into cesspits or into the river Thames. He suspected that the local water wells were being contaminated by water leaking from open drains and nearby cesspits. Investigating further he discovered that areas in which waste water flowed towards the wells had high incidences of cholera while areas where waste water flowed away from the wells were relatively cholera free. Similarly, he was aware that water supplies were also drawn from the Thames, even though sewage was dumped into the river, and the downstream areas, which were likely to be more polluted with sewage, had a higher incidence of cholera than upstream areas, which had cleaner water supplies.

Unfortunately the conclusions in Snow's 1849 paper initially had little more effect than the traditional quack cures, as doctors and scientists thought he was on the wrong track and stuck with the popular belief of the time that cholera was due to miasmas. To overturn the miasma theory he needed more compelling evidence.


In 1854, when another more serious cholera epidemic struck the United Kingdom, he thoroughly investigated each case in the Soho district of London where he then lived. He interviewed the sick and their families and pinpointed the incidences of cholera on a street map of London, searching for a correlation with the places from which the patients had obtained their drinking water. The map showed a large cluster of cholera deaths within walking distance of the district's water pump on Soho's Broad Street and he was thus able to identify the pump as the source of the epidemic. He also investigated possible anomalies in the results, such as "those who were expected to die but didn't" and "those who were not expected to die but did", which could have raised doubts about the validity of the conclusions.

He discovered that, surprisingly, brewery workers living around the Broad Street water pump had remained immune to the 1854 outbreak. Preferring beer over water, they drank only beer, which was produced in a heated process using water from the brewery's own independent well. Similarly there were almost no cases in a workhouse (prison) with 535 inmates near the pump. This was because the workhouse also had its own well and bought water from a different water works. Other more distant outliers, resident near other water works, who had unexpectedly contracted cholera were found to have received their water from the notorious Broad Street pump because they liked its taste or for some other personal convenience. (See a copy of Snow's Cholera Map).
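
Snow did this clustering by eye on a street map, but the logic lends itself to a modern re-creation: assign each case to its nearest pump and count. A minimal sketch with invented coordinates (none of these positions or counts are Snow's data):

# Nearest-pump assignment, the logic behind Snow's cholera map.
from math import hypot

pumps = {"Broad Street": (0.0, 0.0), "Rupert Street": (5.0, 1.0),
         "Warwick Street": (-4.0, 3.0)}          # invented positions

cases = [(0.3, -0.2), (0.8, 0.4), (-0.5, 0.1),   # invented case locations
         (0.2, 0.9), (4.6, 1.2), (-3.5, 2.7)]

tally = {name: 0 for name in pumps}
for case in cases:
    nearest = min(pumps, key=lambda p: hypot(case[0] - pumps[p][0],
                                             case[1] - pumps[p][1]))
    tally[nearest] += 1

print(tally)  # {'Broad Street': 4, 'Rupert Street': 1, 'Warwick Street': 1}
# A count far above the other pumps, once the outliers mentioned above are
# explained, points to the Broad Street pump as the common source.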

Although Snow could not identify the water borne germs under his primitive microscope, his map provided evidence of their presence. We now know them to be the comma-shaped bacterium Vibrio cholerae, which thrives in water contaminated by faeces.

It later turned out that the water from the pump was polluted by sewage contaminated with cholera from a nearby cesspit.


Later in the year, Snow took his cholera map to the town officials to convince them that this public water source had to be closed. The officials were reluctant to believe him, but as a trial, they removed the handle from the pump, making it impossible to draw water and found that the number of new cases began to drop dramatically. Despite the evidence, public health experts still believed in the miasma theory, and the handle of the water pump was replaced and Snow's germ theory did not become accepted until 1866.


Snow's theory was validated in 1865 by Louis Pasteur whose experiments showed that microbes (germs) were the cause of infections and he also explained why. This conclusion was reinforced, in 1883 by German physician, Robert Koch, who took the search for the cause of cholera a step further when he isolated the bacterium Vibrio cholerae, the poison or germs that Snow contended caused cholera. Dr. Koch determined that cholera is not contagious from person to person, but is spread only through unsanitary water or food supplies.

The 19th century cholera epidemics in Europe and the United States ended after cities finally improved water supply sanitation and today, scientists consider Snow to be the pioneer of public health research and the applications of epidemiology.


As an anaesthetist, Snow was one of the first to determine the proper doses of chloroform and ether and to design devices and masks to apply them safely. In 1853 he was chosen to attend the birth of Queen Victoria's eighth child, Prince Leopold. He prescribed the use of chloroform as the anaesthetic for pain relief during the delivery, despite reservations by many in the medical profession concerned about the safety of this new drug. The queen inhaled the chloroform from a handkerchief which had been soaked in the anaesthetic and was delighted with its effect. The subsequent publicity contributed to the public acceptance of anaesthesia.


Snow was a vegetarian and a teetotaller who tried to drink only distilled water that was "pure". Despite his clean living, he died in 1858 at the age of 45 from a premature stroke, thought to have been brought about by complications resulting from his experiments with anaesthetics, which he tested on himself.


1849 Abraham Lincoln, later President, was granted U.S. patent number 6469 for a device for lifting riverboats over shoals [shallow water], making him the only U.S. president ever to have been awarded a patent. Part of his application read, "Be it known that I, Abraham Lincoln, of Springfield, in the county of Sangamon, in the state of Illinois, have invented a new and improved manner of combining adjustable buoyant air chambers with a steam boat or other vessel for the purpose of enabling their draught of water to be readily lessened to enable them to pass over [sand] bars, or through shallow water, without discharging their cargoes...".

The device was never manufactured.


1849 The first accurate terrestrial measurement of the speed of light was made by French physicist Armand Hippolyte Louis Fizeau. Previous measurements had been based on observations of the movement of planets and moons by Danish astronomer Ole Christensen Rømer (1676), English astronomer James Bradley (1728) and others. Fizeau directed a beam of light through the gaps in a rotating cog wheel to a mirror several miles away and observed the reflection of the pulses of light coming back through the gaps in the wheel. Depending on the speed of rotation of the wheel, the returning light would either pass through a gap or be blocked by a tooth. The speed of light could be calculated from the distance to the mirror, the number of teeth on the wheel and its rate of rotation. He determined the speed of light to be about 313,000,000 metres per second, within 5% of the currently accepted value of 186,000 miles per second (300,000,000 metres per second).
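
The arithmetic can be reconstructed in a few lines using Fizeau's generally quoted experimental figures (treated here as assumptions): when the light is first blocked, the wheel has turned half a tooth-pitch during the round trip.

# Fizeau's cog wheel calculation. At the first eclipse the round trip time
# equals the time to rotate half a tooth-pitch:
#   2d / c = 1 / (2 * N * f)   =>   c = 4 * d * N * f

d = 8633.0   # metres to the mirror (Suresnes to Montmartre, as usually quoted)
N = 720      # teeth on the wheel
f = 12.6     # revolutions per second at the first eclipse

c = 4 * d * N * f
print(f"c = {c:.3e} m/s")  # ~3.13e8 m/s, about 5% above today's value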

Also known as Einstein's constant, the speed of light is represented by the symbol c for "celeritas" (Latin - "speed").

Fizeau also showed that the Doppler effect applied to light waves.


1849 The Bourdon tube pressure gauge was patented by French engineer Eugene Bourdon. It is still one of the most widely used instruments for measuring the pressure of liquids and gases of all kinds, including steam, water, and air up to pressures of 100,000 pounds per square inch as well as pressures below atmospheric. It consists of a "C" shaped or spiral curved tube sealed at one end which tends to straighten out when a pressurised fluid is admitted into it. The displacement of the end of the tube is used to move a pointer or other indicator.


1850 Prussian born theoretical physicist Rudolf Julius Emmanuel Clausius publishes his seminal paper "On the Mechanical Theory of Heat" establishing the study of Thermodynamics and outlining the basis of the Second Law. In 1865 Clausius defined the notion of entropy.


1850 The trembler electric bell invented by John Mirand.


1851 In his treatise "On the Dynamical Theory of Heat." Kelvin formally states the Second Law of Thermodynamics, that "Heat does not spontaneously flow from a colder body to a hotter". It was later restated in the form "In a closed system entropy can only increase", recognising the concept of entropy proposed by Clausius in 1865.


1851 Joseph Whitworth, one of Britain's great Victorian engineers, first came to the public's attention with his exhibits of precision engineering at the Great Exhibition of 1851 in London. They included his bench micrometer, based on precision flat planes and a measuring screw, which he claimed (possibly somewhat dubiously) could measure to an accuracy of one millionth of an inch (0.000001 in ≈ 0.025 µm). He also showed the BSW screw thread standards named after him and a range of precision machine tools he had built. These exhibits provided the foundations necessary for mechanisation, for the manufacture of interchangeable parts and for mass production.


In the early nineteenth century machines were very basic and often powered by hand or by a foot treadle. There were no standard measures, parts would have to be individually engineered and each workshop had its own techniques and references. Nuts and bolts were hand made and expensive. They would be made to fit as a pair and were not interchangeable. In 1830 a good workman could typically achieve an accuracy of one sixteenth of an inch but that was all changed by Joseph Whitworth who raised the standards of accuracy in manufacturing to a degree previously unknown, revolutionising the manufacturing of mechanical parts and the production of armaments.


Whitworth, born in 1803, was fostered out at the age of 11 after the death of his mother. He received only an elementary education and on leaving school he became an indentured apprentice for four years in a cotton mill, after which he worked for another four years as a mechanic in a factory in Manchester. At the age of 22 he moved to London where he managed to find a job working for Henry Maudslay, inventor of the screw-cutting lathe. His experience there was invaluable. Maudslay set the highest standards of precision and workmanship which were readily assimilated by Whitworth.

In 1828 Whitworth left Maudslay's to work at Charles Holtzapffel's machine shop, moving on again in 1830 to join Joseph Clement, another eminent London toolmaker, where amongst other things he worked on Charles Babbage's difference engine until the government funding was withdrawn in 1832.


Whitworth's Machines and Tools

In 1833 he returned to Manchester and opened his own business developing and manufacturing machine tools for steam engines, for the cotton and textile industries and for the fledgling railway system. They included precision tools for turning, shaping, milling, slotting, gear cutting and drilling and Whitworth became renowned for his high standards of accuracy and workmanship.


In 1834 he filed his first independent patent for improved precision screw cutting machinery which speeded up the manufacturing of nuts and bolts, dramatically reducing the costs, while at the same time improving the accuracy and thus enabling interchangeable parts.

He was passionate about setting high measurement and workmanship standards and took the accuracy of Maudslay's reference surface planes to another level by scraping rather than grinding, publishing the results in 1840 in his first paper "Plane Metallic Surfaces or True Planes". Applications of his true planes and measurement systems were shown at the 1851 Great Exhibition to great acclaim.


In 1841 Whitworth published a paper recommending a rationalised, universal system of screw threads. The angle between the flanks of the V-shaped thread was a standard 55 degrees and the depth and pitch of the thread were in constant proportion, with the number of threads per inch specified for each screw diameter. The proposal became known as the Whitworth thread. Its adoption by the Woolwich Arsenal, the government's main munitions factory, quickly followed by the railway companies, who until then had all used their own screw thread designs, led to its widespread acceptance, and by 1858 it was in universal use in Britain and several other countries, though it was not formally approved by the British Board of Trade as a national standard until 1880.
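
A hedged sketch of those constant proportions: the standard BSW form is a 55 degree sharp V with one sixth of its depth rounded off at crest and root, from which the published depth and radius constants follow (the construction and the 0.137329p radius are the usual textbook figures, assumed here rather than taken from this text).

# Whitworth (BSW) thread proportions derived from the 55 degree V form.
from math import tan, radians

def whitworth(threads_per_inch):
    """Return pitch, depth and crest/root radius (inches) for a BSW thread."""
    p = 1.0 / threads_per_inch                 # pitch from threads per inch
    H = p / (2 * tan(radians(55 / 2)))         # depth of the sharp 55 degree V
    depth = H * 2 / 3                          # one sixth rounded off each end
    radius = 0.137329 * p                      # standard crest/root radius
    return p, depth, radius

# Example: 1/2 inch BSW is specified at 12 threads per inch.
p, d, r = whitworth(12)
print(f"pitch {p:.4f} in, depth {d:.4f} in, radius {r:.4f} in")
# pitch 0.0833 in, depth 0.0534 in, radius 0.0114 in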


The USA adopted a different standard based upon a 60 degree thread form proposed by William Sellers in 1864 and these were subsequently developed into the American Standard Coarse Series (NC) and the Fine Series (NF).

After 1945, the requirements of international trade created the need for international, rather than national, standards. This need was reinforced by problems experienced during World War II, caused by the lack of interoperability between the equipment of the different allied armies. In response, international standards for American Unified and ISO Metric threads were defined and after 1948 the British Standard Whitworth BSW thread standard was gradually replaced.


BSW standards are however still widely used today for some applications, though not in the US, such as the British Standard Pipe thread and also in some photographic camera fittings.


In 1849 Whitworth lodged 15 patents for machine tools.


The Whitworth Rifle

Whitworth's exhibits at the Great Exhibition had earned him a reputation as a manufacturer of machines of unrivalled quality and precision. Two years later, as Britain was becoming concerned about the possibility of war in the Crimea, his talents were called upon by the government's War Office, who asked him to provide equipment for the mass production of their standard issue Enfield Rifle. This was based on the French, muzzle loading Minié rifle design and produced at the government's Royal Small Arms Factory at Enfield near London. Whitworth was cautious, never having made a firearm before. The Enfield was notoriously inaccurate and unreliable. Why would he want to be involved in making such a questionable product? He suggested instead that he should carry out research into a new weapon to replace it. This offer was turned down, but by 1854, when Britain had entered the war, Whitworth was once more approached by the War Office. In response he proposed to undertake a series of trials to analyse the factors contributing to a rifle's performance and to determine the best manufacturing methods before production could start. This would take about three years - one year to construct an experimental shooting gallery suitable for housing the experiments and two years to carry them out. There was little support for this approach from military officers who believed that battles were won with heavy artillery, not with rifles. The project was eventually approved after Whitworth pointed out that he would use the experiments on small arms to better understand the principles involved, particularly rifling, and that this technology could then be scaled up for heavy artillery.


The main problem to be solved centred around the rifle barrel. In 1854 no rifle barrel had ever been bored from a solid metal rod and there were no suitable tools for doing it. It was not possible to drill a deep narrow bore since the available drill bits deviated uncontrollably from the desired path. Drilling from both ends of the rod was no better as there was no guarantee that the two bores would meet in line. Consequently, all small arms barrels were produced by blacksmiths swaging a wrought iron strip lengthways around a cylindrical metal rod, called a mandrel, with a diameter matching the desired bore of the gun barrel. The strip was heated until white hot then hammered by hand, using a 5 pound (2 kg) hammer, into a curved swage block mounted on an anvil, to form a long "U" shaped channel. Then, after further heating to maintain the white hot temperature, the "U" shaped channel was gradually curved around the mandrel by further hammering until the opposite edges met. Continued hammering in the presence of a flux was required to fuse the edges together to form a seamless, solid tube. This was highly labour intensive and required great skill from the blacksmith. With such a primitive process it was difficult to control the accuracy of the results and individual barrels had wide variations in the diameter of the bore and the straightness of the barrel. It was at least possible to clean out the bore using a plate or spade drill but this did little to improve the accuracy.


This was the method used to manufacture the Enfield rifle barrels. A "generous" tolerance was permitted on the bore diameter to allow for muzzle loading. Even if the barrel was straight, at the low side of the tolerance the projectiles could jam in the bore, and using the traditional ram rod to push a lead projectile down the bore damaged its point, destroying its aerodynamic shape. At the high side of the bore tolerance the projectile would be loose in the bore as it exited, so that there was no control over its direction.


Whitworth's next challenge was the rifling. It was already known that rifling the gun barrel's bore provided spin stabilisation which improved a projectile's range, accuracy and stability. In 1835 English gun maker William Greener developed a self expanding bullet and later mechanical fit bullets for this purpose. His muzzle-loading shotguns and rifles had been demonstrated at the Great Exhibition where he was awarded a gold medal.

  • Self Expansion Projectiles
  • The Greener bullet was designed to fit easily into the muzzle for easy loading, but to expand due to the explosive charge when fired so that it would engage with the rifling while at the same time reducing the windage, the wasted energy of the charge leaking past the bullet. The reduction in windage allowed more of the energy of the explosive charge to be transferred to the bullet, however some of this gain was lost due to the increased friction between the bullet and the barrel due to the rifling. Greener's bullet was a two part projectile with a hollow base fitted with a plug which forced the base of the bullet to expand on firing.

    The Enfield rifle was based on Greener's ideas. It used a version of the Minié ball invented by French Army officer Claude-Étienne Minié in 1846. It was a conical bullet with lead skirting containing three exterior grease-filled grooves around its circumference and a conical hollow in its base containing an iron plug which expanded the bullet on firing.

    William Armstrong's big guns also used a variation on this design.


  • Mechanical Fit Projectiles
  • Soft lead coated bullets were prone to fouling up the gun barrels and Greener proposed mechanical fit bullets to overcome this. This enabled the use of harder, more destructive, steel bullets and shells, however it needed tighter tolerances and much more precise dimensional control of the barrel rifling and of the bullet shape to avoid the possibility of jamming and to keep the windage losses to a minimum. Few gun makers were able to deliver this accuracy on a repeatable basis. This was the route taken by Whitworth. Instead of machining narrow slots in the barrel to guide the oversize bullets, he chose to make the cross section of the bore hexagonal in shape with the spin being imparted by a helical (often incorrectly called "spiral") twist to the bore along the length of the barrel. The corresponding bullets would also need to have a hexagonal cross section fitting snugly into the barrel.


In 1854 Whitworth began his test programme to find ways of designing a superior rifle which solved the above problems. The first task was to build the shooting range. It was a brick tunnel 500 yards (457 m) long and 20 feet (6 m) high with a tiled roof and a level concrete base. This was not ready until the following year, after which tests began.


Like all his work, Whitworth's comprehensive test programme was meticulous and methodical. Rather than producing a series of trial prototypes as was typical at the time, he investigated and measured every aspect of gun design, changing one element at a time to determine its effect on performance. This included the weight and composition of the explosive charge, the length, bore and weight of the barrel, the weight and shape of the bullets, self expansion and mechanical fit bullets, different types of rifling and spiral turn rates and the manufacturing methods and tolerances needed to produce the parts.


For his initial test sample of hexagonal rifling, he used six thick metal strips pressed together under great heat to form a hexagonal tube held together by external hoops. The desired helicality (usually called "spirality") was achieved by heating and twisting the tube. For ballistics testing he also had to improvise since no suitable high speed test equipment was available. To track the trajectory of the projectiles he suspended tissue paper screens every 30 yards to record their path, and to determine the spin behaviour he used bags full of bran to catch bullets in mid flight.

The tests showed that the friction of expansion bullets caused an efficiency loss of 20% to 21% in the barrel whereas mechanical fit bullets in a hexagonal barrel suffered a loss of only 2% to 3%. In summary he demonstrated that using polygonal rifling with a fast helical turn rate and mechanical fit bullets gave more accuracy, longer range and higher impact energy with a lower explosive charge than contemporary self expansion bullets.


For production, in 1857 Whitworth devised a more accurate method of deep boring rifle barrels from solid. Up till that time the only drill bits available were spade or plate bits welded to a narrow rod to allow the swarf to pass. They had poor axial location, tending to wander, and poor swarf removal. He solved both problems simultaneously by extending the length of the spade and giving it a twist over its whole length which improved its rigidity and helped remove the swarf. This was the first simple twist drill.


In 1861 a more practical twist drill bit which we would recognise today was invented by American engineer Stephen A. Morse. It used a more substantial steel rod, with a diameter equal to the hole to be bored to avoid the problem of wandering. Early Morse twist drills were made by cutting straight parallel grooves on opposite sides, along the length of the rod, then heating and twisting it to form the helical grooves. The following year another American engineer, Joseph R. Brown, invented the first fully universal milling machine which was used to cut the helical flutes in twist drills.


After boring, an adjustable broach was passed down the bore to cut the necessary hexagonal rifling in the bore.

Whitworth's first prototype rifle, incorporating all his new found knowledge, and his elongated hexagonal bullets were ready in 1857 as planned, but this was one year after the Crimean War had ended.


Trials were carried out the same year to compare the Whitworth and Enfield rifles and the performance of the Whitworth rifle was superior in every way. It was able to hit the target at a range of 2,000 yards, whereas the Enfield was only able to hit the same target at a range of 1,400 yards. It was lighter, used a smaller charge and permitted a faster reload rate than the Enfield, and it was the first rifle with the capability to shoot a bullet through a 0.6 inch (15 mm) wrought iron plate (at a reduced range of 20 yards). This accuracy was also demonstrated at the British National Rifle Association meeting in 1860 when Queen Victoria fired the first shot from a Whitworth rifle mounted on a fixed rest, hitting the bullseye of a target set up 400 yards away.

Despite these successes the British government rejected the design because the calibre of the Whitworth barrel was smaller and more prone to fouling than the Enfield, and the Whitworth rifle also cost approximately four times as much to manufacture. Nevertheless, Whitworth was able to sell the rifle to others including the French army, and also to the Confederate army during the American Civil War (1861-1865) where it was highly valued for its range and accuracy. Between 1857 and 1865 the company sold 13,400 rifles.


Whitworth Artillery

Starting in 1859 Whitworth took the design principles he had learned about rifles and applied them to the design of heavy artillery. At the same time he developed a new breech loading mechanism to complement the design. Trials over the next three years indicated that cast iron or hard steel gun barrels had a tendency to fracture or explode when they were unsound, whereas gun barrels made from ductile steel were more likely simply to deform, a much less dangerous failure. Casting flawless ductile steel however was very difficult, mainly due to the presence of tiny voids or air pockets within the ingot. Whitworth's solution for improving the bursting strength of the guns was to construct the barrels from solid steel. Concerned about the possibility of air bubbles in Bessemer steel he applied extreme hydraulic pressure to the fluid metal during the casting process followed by similar hydraulic pressure, rather than the steam hammer, for forging. The method, which he called "fluid-compressed steel", was patented in 1865 and the metal produced was known as "Whitworth steel".


Between 1859 and 1862, he produced breech loading 3, 12, 32, 70, 120 and 130 pounder guns with excellent performance but the War Office were not impressed.

In formal shoot out trials against Armstrong's big guns in 1859, Whitworth's 3 pounder (1.4 kg) shot, at a range of 5.25 miles (8.5 km), deviated a mere three yards (2.7 m) from the centre of the target. At a range of 2 miles it hit the centre of the target in two out of five shots. His 12 pounders also achieved impressive accuracy at 5.8 miles.

Again in 1862, Whitworth's guns achieved superior performance to Armstrong's. With a target representing an ironclad warship, made up from a 4.5 inch (114 mm) steel plate backed up by 18 inches (457 mm) of teak and a further 0.625 inch (16 mm) steel plate, at a range of 600 yards (550 m), Whitworth's 131 pound (59.4 kg) projectile passed straight through all three layers and buried itself in the sand. Despite these successes, subsequent production contracts were awarded to Armstrong. As with the rifle, Whitworth was able to find overseas customers for his guns which included both sides in the American Civil War.


Whitworth's biographer, Norman Atkinson, implied that there were questionable procurement practices at the War Office. Whitworth and Armstrong were serious rivals in the gun making business. Both had superior products which had lost out against inferior competition from government owned factories, the Woolwich Arsenal in the case of Armstrong's big guns and the Enfield Small Arms Factory in the case of Whitworth's rifles. Now the new big gun contracts had gone to Armstrong, who had trained as a lawyer and had more friends in high places, while Whitworth, who had made his way up from the shop floor, did not help himself with his aggressive and stubborn attitude.

By contrast such perverse government decision making, and the resulting wasteful use of resources, did not hamper their international rival, the Krupp heavy arms business, which enjoyed the constant and unstinting patronage of the German government once it had established its credentials as an arms supplier, propelling it to become the biggest company in Europe by the beginning of the twentieth century.

In 1897, ten years after Whitworth's death, Armstrong purchased Whitworth's company.


Between 1854 and 1878 Whitworth was awarded 20 patents relating to arms production. Guns using polygonal bores are still in production today.


Whitworth was a strong believer in the value of technical education. During his life he founded "Whitworth Scholarships" to advance mechanical engineering and in 1868 donated £128,000 (£10.1 million in today's money) to a similar government scheme. He also backed the Mechanics' Institute, now part of Manchester University.

At the time of his death in 1887, with the exception of Cecil Rhodes, Whitworth was the country's most generous benefactor. He bequeathed much of his fortune to the people of Manchester for public works and a hospital and appointed three legatees, providing them each with over £500,000 (£46 million today) to spend on projects of which they believed he would have approved.


  • Footnote
  • The Crimean War (1853-1856) was The First Modern War and The First Media War.

    The Crimean War saw the first tactical use of modern technology changing the nature and immediacy of war. It included the following:

    • The use of armoured warships and submarine mines.
    • A 7 mile (11 km) long railway to carry supplies from the port of Balaclava to the troops besieging Sevastopol was constructed by the British army.
    • See also the Battle of Inkerman which highlighted the need for improved weapons technology to gain military superiority - still a never ending quest.
    • The electric telegraph enabled better communications with front line troops but it also enabled the first "live" reporting of the state of battle from remote battlefields, not just to government headquarters, but also through newspapers to the general public.
    • It was also the first European war to be photographed.
    • The gathering, use and publication of statistics by Florence Nightingale showed that soldiers were seven times more likely to die of disease than of their wounds.

    These last three items, for the first time, brought home to the general public the chaos and true horrors of war creating great anxiety and much soul searching.


1851 German inventor Heinrich Daniel Ruhmkorff patents the Ruhmkorff Induction Coil capable of producing sparks 30 centimetres long. Basically a high turns ratio transformer, it was invented in 1836 by Irish priest Nicholas Callan.


1851 French physicist Léon Foucault proved for the first time that the Earth rotated on its axis by suspending a 28 kg brass-coated lead bob pendulum on a 67 metre long wire from the dome of the Panthéon in Paris. The plane of the pendulum's swing, though fixed in space, appeared to rotate slowly around the vertical, completing a full circle in about 32 hours at the latitude of Paris, thus demonstrating the rotation of the Earth.
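
The rate of this apparent rotation depends on latitude. In modern notation (a standard textbook result, not stated here in Foucault's own terms), the time for the swing plane to complete a full circuit at latitude φ is:

    \[ T = \frac{T_{sidereal}}{\sin\varphi} = \frac{23.93\ \text{hours}}{\sin 48.85^{\circ}} \approx 31.8\ \text{hours at Paris} \]

so only at the poles does the pendulum's plane complete a full circle in one (sidereal) day.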


In 1852 Foucault performed similar experiments with gyroscopes. Though he was not able to sustain the rotation of the rotor for a full day, he was able to demonstrate that over a short period of time before friction slowed the rotor, the gyroscope maintained a fixed position in space independent of the Earth's rotation.

See more about Gyroscopes.


1851 American inventor and entrepreneur Isaac Merritt Singer invented the Singer Sewing Machine. Like many great inventors his inspiration drew on prior art to which he added his own contributions, bringing the commercial success that had eluded his predecessors. In this case he improved the lockstitch mechanism of Elias Howe (See below), making it more reliable. He changed the needle movement from "side to side" to "up and down", enabling the use of a straight, rather than curved, needle, and also enabling the machine to sew on a curved path. He also added automatic feed of the cloth and a presser "foot" to hold the cloth down against the upward stroke of the needle, and he introduced the foot treadle to power the movement of the needle and shuttle, replacing the hand-cranked mechanism used in all previous machines.

His major innovation however was in the marketing of the product. Previously sewing machines had been designed for industrial use. Singer launched the first domestic models in 1856 and pioneered the introduction of the Hire Purchase Agreement or Installment Payments, with $5 securing a machine, followed by monthly payments of $3 until the full purchase price was paid off. This allowed people of modest means to acquire relatively expensive capital goods. He later adopted the policy of accepting trade-ins against new purchases. These measures in turn increased the potential market for the machines and allowed the introduction of mass production methods for the first time, reducing the costs and increasing the market potential still further. By 1870 the price of a new machine had been reduced to only $30.

Singer expanded into the European market, establishing a factory in Clydebank, near Glasgow, controlled by the parent company, becoming one of the first American-based multinational corporations, with agencies in Paris and Rio de Janeiro.


Singer's machine came towards the end of the Industrial Revolution which had largely benefited the textile industry with the mechanisation of spinning and weaving. His sewing machine dramatically reduced the time to make up garments while simultaneously improving both the quality and strength of the stitching giving further impetus to the textile industry by providing new markets for the increased textile production.

According to Brian Coats of the eponymous thread company, "To put sewing mechanisation into perspective, a skilled seamstress can manage 40 stitches per minute (spm) at full speed. The earliest machines claimed speeds of about 250 spm, Singer's machine in the 1850s could reach 900, and a contemporary domestic machine can do 1,500. Industrial machines will now get up to 10,000 spm and can sew coarse fabrics such as canvas and denim so fast that they will catch fire."

Just as important as the improvement in efficiency however, the sewing machine provided a means for families not just to make their own clothing, but also to start small family businesses to supplement their incomes and improve their lives.


There had been many attempts at designing sewing machines in the past leading up to, and perhaps influencing, Isaac Singer's design in 1851 but most were unreliable or expensive and failed to gain commercial acceptance. These antecedents included the following:


  • 1755 German inventor Charles Weisenthal was awarded an English patent for his invention of a sewing needle for use in a machine, but the description of the machine was not included in the patent, so it is unknown whether he actually designed a machine as well.
  • 1790 English inventor and cabinet maker, Thomas Saint was issued the first patent for a complete machine for sewing. It is not known if Saint actually built a working prototype of his invention. The patent describes an awl that punched a hole in leather and passed a needle through the hole. A later reproduction of Saint's invention based on his patent drawings did not work, though it did work when some modifications were made.
  • 1804 A French patent was granted to English inventors Thomas Stone and James Henderson for "a machine that emulated hand sewing."
  • The same year a British patent was granted to Scottish inventor John Duncan for an "embroidery machine with multiple needles."

    Both inventions failed and were soon forgotten by the public.

  • 1810 German hosiery maker, Balthasar Krems developed an automatic machine for sewing caps but did not patent it. Like many others it never functioned well and was forgotten.

  • 1814 Austrian tailor, Josef Madersperger was issued a patent for a machine which made embroidery stitches, but it could not sew seams. By 1839 he had also received a patent for a machine suitable for chain stitching but this was not considered to be successful.
  • A chain stitch is formed by a single thread introduced from one side of the material only and is normally used for hemming or temporary stitching. It will unravel rapidly if the last stitch in the chain is not secured.

  • 1818 The first American sewing machine was invented by pastor John Adams Doge and John Knowles. Their machine failed to sew any useful amount of fabric before malfunctioning.
  • 1830 The first practical sewing machine was patented by the French tailor, Barthelemy Thimonnier. His machine had no transport mechanism, with the cloth being moved forward by hand, and used only one thread and a hooked needle that made an acceptable chain stitch like that used in embroidery. He set up a garment factory with 80 of his machines and contracted with the French army to manufacture their uniforms, but in 1841 an angry group of French tailors, who feared being put out of work by his new invention, almost killed him and burned down his garment factory. Thimonnier died bankrupt.
  • 1833 American Walter Hunt invented the lockstitch sewing machine. In traditional lockstitch sewing, the needle thread interlaces with a separate under-thread, which is on a small bobbin over which the needle thread can pass to lock the stitch in place. This is a much more secure structure than the chain stitch.
  • Hunt's machine had two spools of thread and a curved needle with the eye at the point rather than in the shank as in conventional hand sewing needles. Hunt's needle passed the thread through the fabric in an arc motion, creating a loop on the other side of the fabric, and a second thread carried by a shuttle running back and forth on a track passed through the loop creating a lockstitch. It was the first time an inventor moved away from attempting to duplicate hand sewing motions. Unfortunately his machine was only suitable for sewing straight seams.

    He later lost interest in the device because he believed his invention would cause unemployment and never patented it.

    Hunt also invented the safety pin.

  • 1844 The earliest known patent for a sewing machine which used two threads and the combination of an eye-pointed needle and a shuttle to form couched stitches was granted to Englishmen John Fisher and James Gibbons who received the patent for a lace making machine which was almost identical to the machines later made by Howe and Singer. The commercialisation of Fisher's machine was hampered by poor preparation of his patent application and subsequent legal challenges by Howe and Singer.
  • 1846 The first American patent was issued to Elias Howe of Spencer, Massachusetts for "a process that used thread from two different sources". It was basically a refinement of Hunt's idea. With a price tag of $300, the equivalent of six months' wages, the machine was beyond the means of ordinary families and Howe struggled to attract commercial interest in his invention in America. Trying his luck in England he eventually sold his first machine there but ended up in a debtors' prison in 1849.
  • Returning to Massachusetts he discovered that "his" lockstitch mechanism was being copied by many others and he embarked on a series of incessant law suits to protect his design. The most serious offender was Isaac Singer whose machine used the same lockstitch mechanism that Hunt had invented, but which Howe had patented, and in 1854 Howe sued Singer for patent infringement. The courts upheld Howe's patent, since Hunt had abandoned his design and never filed a patent, giving Howe the exclusive patent rights to the eye pointed needle, and Singer, as well as all others, had to pay royalties to Howe on every machine manufactured.

    Howe then saw his annual income jump from $300 to more than $200,000 a year.

    In 1856 Howe, Singer and two other sewing machine manufacturers, "Grover & Baker", and "Wheeler & Wilson" agreed to pool their various patents creating the Sewing Machine Combination which extracted royalties of $15 per machine for the use of their patents by others.

    Between 1854 and 1867, Howe earned close to $2 million from his invention. During the Civil War (1861-1865), he donated a portion of his wealth to equip an infantry regiment for the Union Army and served in the regiment as a private.

    Elias Howe died in 1867, the year his patent expired.

  • 1849 American John Bachelder from Boston patented a sewing machine with a belt to feed the fabric along a horizontal sewing surface, though his invention was still only capable of making chain stitches. The patent for his feed mechanism was later sold to Singer.
  • 1851 American inventor, Allen B. Wilson, developed the rotary hook shuttle used extensively in lockstitch sewing machines which enabled much faster, vibration-free sewing speeds and the intermittent four-motion feed for advancing the material between stitches which is still used today.

Singer lived an unconventional lifestyle. He ran away from home at the age of eleven to join a travelling stage act and became a consummate showman who put his talents to good use in promoting his machines. He also lived a life of polygamy, marrying his first wife when she was only fifteen and subsequently fathering at least 24 children with seven common law wives and various mistresses.

On the darker side, when his business partner George Zieber fell seriously ill and was not expected to survive, Singer persuaded him to sign over his share of the company's assets, which were at the time worth around $500,000, for only $6,000. Zieber had helped Singer to start up the sewing machine company by giving him his entire life savings of $1,700 in return for a full share of the venture and even contributed his own ideas for improvements to the designs. Zieber recovered however, and though he managed to obtain menial employment from the company he never received any offer of compensation for this blatantly immoral treatment.


1852 English chemist Edward Frankland introduced the notion of the chemical bond and the idea of valency: that an atom of one element can combine with only a definite number of atoms of another element.


1852 Joule and Kelvin (William Thomson) discovered that when a gas is allowed to expand without performing external work, the temperature of the gas falls. Now known as the Joule-Thomson Effect, it is the basis of nearly all modern refrigerators and gas liquefaction processes. (The Peltier Effect is also used in some special cooling applications)

For an explanation see Refrigeration Systems in the section on Heat Engines.
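
In modern notation the effect is characterised by the Joule-Thomson coefficient, the rate of change of temperature with pressure at constant enthalpy:

    \[ \mu_{JT} = \left(\frac{\partial T}{\partial P}\right)_{H} \]

A positive coefficient, the case for most gases at ordinary temperatures, means the gas cools as its pressure falls during the expansion, which is the effect refrigerators and gas liquefiers exploit.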


1852 American engineers William F. Channing and Moses Gerrish Farmer installed the first municipal electric fire alarm system using a series of electric bells and call boxes with automatic signaling to indicate the location of a fire in Boston, twenty four years before the advent of the telephone.


Farmer was a prolific inventor in the same mould as Edison. In the same year (1852) he also demonstrated diplex telegraphy, the simultaneous transmission of two signals in the same direction down a wire (or channel), the first example of time division multiplexing (TDM). It was based on two rotating switches, one at each end of the line, which connected the transmission line alternately to each transmitter / receiver pair permitting sequential interleaving of signals from each channel. Unfortunately he was not able to develop it into a practical system because of the difficulty of synchronising the receivers with the transmitters, a problem which was not solved until 1874 by Baudot.
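
Farmer's rotating switches amounted to what would now be called round-robin interleaving. A minimal sketch in Python (the data and function names are ours, purely illustrative), showing both the principle and why, as Farmer found, the two ends must stay exactly in step:

    # Time division multiplexing: take one symbol from each channel
    # in turn, as Farmer's rotating switch did.
    def tdm_interleave(ch1, ch2):
        return [s for pair in zip(ch1, ch2) for s in pair]

    # The receiver's switch must know which time slot belongs to which
    # channel; if it is one slot out of step the channels are swapped.
    def tdm_deinterleave(line):
        return line[0::2], line[1::2]

    line = tdm_interleave("ABCD", "1234")    # ['A','1','B','2','C','3','D','4']
    print(tdm_deinterleave(line))            # (['A','B','C','D'], ['1','2','3','4'])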

In 1858 he did however patent a two battery duplex system similar to Gintl's 1853 design. (See next). As with the diplexer, there were obstacles to overcome before practical duplexers were ready for roll out. In this case it was the design by Stearns in 1872 which took the honours.


In 1853 Farmer also patented an improved battery.


1853 Austrian telecommunications engineer Julius Wilhelm Gintl, working in Vienna, invented a method of duplex telegraphy, the simultaneous transmission of two signals in opposite directions down a wire (or channel) - the first telecommunications duplexer, allowing simultaneous message transmission and reception. It was a two battery, "compensating" system with differential relays, in which two samples of the transmitted signal were arranged to cancel each other in the local receiving relay but were able to operate the remote receiving relay normally.
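
The compensating principle survives today in the "hybrid" circuits of telephone and modem line interfaces: each station subtracts a local replica of its own transmission from the combined signal on the line, leaving only the distant station's signal. A minimal numerical sketch in Python (the signal values and names are ours, purely illustrative):

    # Gintl-style duplex: both ends transmit at once on a shared line,
    # which carries the sum of the two signals.
    sent_local  = [1, 0, 1, 1]    # this station's transmission
    sent_remote = [0, 1, 1, 0]    # the distant station's transmission

    line = [a + b for a, b in zip(sent_local, sent_remote)]

    # Each receiver cancels its own contribution, so only the distant
    # signal operates the local relay.
    received = [v - s for v, s in zip(line, sent_local)]
    print(received)    # [0, 1, 1, 0] - the remote signal recovered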

In 1855 German engineer Carl Frischen, working for Siemens & Halske, registered a patent for a simplified version of Gintl's design with only one compensating battery.


1853 The electric burglar alarm was patented by American minister Augustus Russell Pope. When a door or window was opened, it closed an electrical contact initiating an alarm. The rights to the patent were purchased by Edwin Holmes who began manufacturing and selling the alarms in 1858 and was subsequently credited with its invention.


1853 Almost 200 years after Newton, Scottish engineer William John Macquorn Rankine introduced the concept of potential energy for stored energy (in mechanical terms, energy based on position). Together with Kelvin he applied the concept to electrical potential, whose unit of measurement they named the volt.


1853 Mathematical representation of the voltage-current relationships of capacitors (i = C dv/dt) and inductors (v = L di/dt) derived by Kelvin enabling the analysis of RLC circuits and the performance of telegraph cables. A more detailed model of the cable or transmission line, based on Kelvin's theory, but taking into account the distribution of the capacitance and inductance along the line, was developed by Kirchhoff in 1857.
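
For a long submarine cable, where the series resistance R and shunt capacitance C per unit length dominate and inductance can be neglected, this distributed model reduces to Kelvin's well known diffusion equation:

    \[ \frac{\partial v}{\partial t} = \frac{1}{RC}\,\frac{\partial^2 v}{\partial x^2} \]

from which follows Kelvin's "law of squares": the retardation of a telegraph pulse grows as the square of the cable's length, a result which dominated the design of the Atlantic cables described below.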


1854 The fundamental idea of the electrical transmission of sound (the telephone) was published in the magazine "L'Illustration de Paris" by Belgian experimenter Charles Bourseul, working in France.


1854 Heinrich Geissler, a master glassblower in Bonn, Germany, was the first to make use of improved vacuum technology to create a series of astonishingly beautiful evacuated glass vessels into which he sealed metal electrodes. Geissler's vacuum tubes emitted brilliant and colourful fluorescent light when energised by a high voltage which aroused the interest of both scientists and artists of his day.


1854 English mathematician George Boole published "An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities" in which he expressed logical statements in mathematical form. Now known as Boolean Logic it also used a binary approach to represent whether statements were "True" or "False". Starting with statements A, B or C etc. which are either true or false, (With binary or two valued logic they can't be anything in between. "Maybe" or "sometimes" are not acceptable.), other statements which are true or false, can be derived by combining the initial statements together using the fundamental logic operators AND, OR and NOT.

A simple example with two propositions A and B:

A "Ford makes cars" is true.

B "Ford sells hamburgers" is false.

Using Boolean logic we can make the following more complex statements which are also correct.

A AND B is false.

A AND NOT B is true.

A OR B is true.
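
In modern programming terms Boole's two-valued logic maps directly onto the Boolean type built into virtually every language. A minimal sketch in Python (the variable names are ours, purely for illustration):

    # Boole's two-valued logic using Python's built-in bool type
    A = True     # "Ford makes cars"
    B = False    # "Ford sells hamburgers"

    print(A and B)        # False : A AND B
    print(A and not B)    # True  : A AND NOT B
    print(A or B)         # True  : A OR B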

It seemed trivial at first and Boole's symbolic logic made little impact at the time until twelve years later it was picked up and developed by American logician Charles Sanders Peirce. However Boolean logic still remained in obscurity until its value was eventually recognised by Claude Shannon in 1937 and used to make improvements to Vannevar Bush's analogue computer, the differential analyser. Overnight Boolean algebra became a basic information processing concept now used in all modern digital computers.

See Boolean Logic and Digital Circuits.


Boole's wife, Mary Everest, niece of Sir George Everest after whom the mountain was named, was not blessed with the same logical mind as her husband. In 1864 at the age of 49 Boole caught a serious cold after walking two miles in the rain and giving a lecture still dressed in his wet clothes. His wife believed that a remedy should resemble the cause. She put him to bed and threw buckets of water over the bed since his illness had been caused by getting wet. Boole died of pneumonia.


1854 Irish physicist John Tyndall, in a demonstration at the Royal Institution, directed a beam of sunlight into a curved stream of water pouring from a container. Due to total internal reflection at the boundaries of the water stream with the air, the light followed a zig zag path inside the arc of the water stream which acted as a light pipe. This is the phenomenon on which fibre-optics are based today.
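
The light is trapped whenever it meets the water-air boundary at more than the critical angle θc, which Snell's law gives (using today's refractive index values) as:

    \[ \sin\theta_c = \frac{n_{air}}{n_{water}} = \frac{1.00}{1.33} \qquad\Rightarrow\qquad \theta_c \approx 48.8^{\circ} \]

Rays meeting the boundary at angles of incidence (measured from the normal) greater than this are completely reflected back into the stream, exactly as in a modern optical fibre.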

Tyndall was a prolific inventor as well as a renowned populariser of science in the mould of Michael Faraday whom he counted among his friends.


Experimenting with cures for insomnia, he died at the age of 73 from an overdose of chloral, a sedative administered by his wife.


1854 Scottish chemist John Stenhouse invented the gas mask. It was based on the ability of powdered charcoal to adsorb large volumes of gases. Carbon based adsorbers are still the most common filters in use today.


1854 Italian priest and engineer Eugenio Barsanti in partnership with hydraulic engineer Felice Matteucci patented a four stroke, spark ignition internal combustion engine running on coal gas. They failed to sufficiently promote their business and when Barsanti died at the age of 43 in 1864 Matteucci was unable to carry on alone and Otto's recent (1862) similar design became the industry standard.


1855 British chemist and inventor Alexander Parkes produced the first synthetic (man made) plastic. By dissolving cellulose nitrate in alcohol and camphor-containing ether, he produced a hard solid which could be moulded when heated, which he called Parkesine (later known as celluloid). Unfortunately, Parkes could find no market for the material. In the 1860s, John Wesley Hyatt, an American chemist, rediscovered celluloid and marketed it successfully as a replacement for ivory. Thus was born the plastics industry which brought new opportunities to the electrical industry for both insulation and packaging.


1855 During the Crimean War (1853-1856), in response to military demands for large quantities of heavy guns and stronger metals to make them, English engineer and inventor, Henry Bessemer developed and patented a more effective, fast and inexpensive method of mass-producing steel from pig Iron simply by reducing its Carbon content. (See more about how the properties of Iron and steel are determined by their carbon content). He devised a way of purifying the iron by blowing cold air through the molten metal to oxidise and separate out the impurities, which included Silicon, Manganese as well as the Carbon, thus converting the high carbon pig iron into low carbon steel. The silicon and manganese oxides were removed as slag and the Carbon monoxide burned off into CO2.


The chemistry governing the properties of steel was not well understood at the time and several industrial chemists and foundrymen had been working for some time on similar processes including James Nasmyth in England and William Kelly in the United States as well as others working independently. Kelly's explanation of the process was that the Carbon content of the iron was burned out by blowing air through the molten Iron. He claimed to have built a pilot plant but Bessemer was the first to build a full scale practical converter for which he submitted a British patent application in 1855 which was granted in 1856. On hearing of Bessemer's subsequent U.S. patent application in 1856 which was also granted, Kelly belatedly challenged this and applied for a U.S. patent himself for the basic chemical process which was granted in 1857. Kelly was however declared bankrupt the same year and was forced to sell his patent.


Bessemer's patent concerned the practical system in which the decarbonising process took place in a 20 feet (6 m) tall, egg shaped, tilting steel retort or furnace lined with refractory material and known as a Bessemer converter. Molten pig iron was fed into the retort from the top and air was blown in from the bottom. The process itself did not use any fuel. The oxidation reactions were exothermic and kept the temperature up and the iron molten. On completion of the conversion the retort was tilted to pour out the molten steel into moulds.

The initial process was successful in removing the impurities from the iron but it also removed too much of the carbon, the amount of which controlled the properties of the steel, and it left excess Oxygen in the steel, making it too soft to be useful.

The problem was solved the following year by metallurgist Robert Mushet who came to the rescue with a solution for managing the Carbon content of the steel and in so doing ensured the economic viability of Bessemer's converter.

Further innovations allowing the use of cheaper and poorer quality Iron ores were introduced by Gilchrist Thomas in 1876.


Bessemer's steel was much stronger than wrought Iron and cast iron whose serious weaknesses had been exposed in some construction projects. It was also less expensive than wrought Iron which it rapidly replaced.

Bessemer's converter, together with the innovations introduced by Mushet and Thomas, reduced the costs of steelmaking by about 80% but just as importantly it enabled the large scale production of steel. Previously steel had been made by artisans in small quantities in crucibles and involved much highly skilled manual labour. Steel was expensive and its use was mainly limited to small, high cost products such as cutlery, scissors, hand tools, swords and small arms. Large metal structures were made of wrought or cast iron. The availability of mass produced, low cost, high quality, bulk steel in large pieces opened the door to a host of new applications for steel in railways, construction, ship building, heavy armaments, cable making and high pressure boilers and had a major impact on industrial development in the nineteenth century.


Bessemer was a prolific inventor with at least 129 patents to his name and made his first fortune selling "Gold" paint, enabling passable imitations of the very expensive ormolu to be made. He made it from fine powdered brass suspended in a paint like solution. Rather than patenting it, he kept the process a closely guarded secret, carrying out parts of the production in four separate locations so that nobody could know the complete process. Bessemer's Gold paint was used to adorn much of the gilded decoration which was popular at the time, and brought him great wealth.


See also Iron and Steel Making.


1856 British metallurgist Robert Forester Mushet found an inexpensive way of providing more precise control of the Carbon content of Bessemer steel. He recognised that Bessemer's steel was "over oxidised" and by adding small, controlled quantities of ferro-Manganese, or spiegeleisen (German Spiegel - mirror and Eisen - Iron) to the mix, this could be reversed. Spiegeleisen is an alloy of Iron containing approximately 15% manganese and small quantities of Carbon and Silicon and when added to the furnace charge, the Carbon in the spiegeleisen replaced a controlled amount of the Carbon lost in the Bessemer conversion and the surplus Manganese and Silicon were oxidised by the Oxygen supply and removed as slag. Mushet's innovation restored the strength to Bessemer's steel, making it suitable for rolling, forging and high temperature working.


The following year Mushet developed Tungsten steel, the first commercial steel alloy, by adding about 8% of Tungsten to molten steel in a crucible. When forged at a low red heat, and allowed to cool gradually, the steel is naturally hard (so called self-hardening) and suitable for use as a tool steel. It maintains its edge and can cut much harder metals at much higher speeds than had previously been possible. Until then, the only way to produce hard tool steel had been to heat high Carbon steel to a very high temperature and to quench it quickly in cold water. Steel hardened in this way lost its hardness if it was overheated during use. Tungsten steel revolutionised the machine tools industry and industrial metalworking.

Mushet went on to develop and manufacture other iron and steel alloys with Chromium and Titanium.


Also in 1857 Mushet was the first to make durable rails for the railways from steel rather than the more brittle cast Iron which had been used until then. Steel rails were also less costly to produce and were soon adopted worldwide.


Like many prolific inventors he was not a good businessman and never made any money from his inventions. He was however paid a small pension by Bessemer in recognition of his invaluable contribution to the converter process without which Bessemer's design would not have been viable.


1856 The Dean of Science at the University of Lille, French chemist and microbiologist Louis Pasteur, was asked by Bigo, a local industrialist who produced alcohol from fermented beet juice, to investigate why his product was becoming contaminated and sour, a problem which was also experienced by other local alcohol manufacturers. Pasteur was aware that twenty years earlier another chemist, Charles Cagniard de la Tour, after examining fermentation products under a microscope, claimed that the yeast involved in fermentation was a living organism. However, established chemists at the time ridiculed his theory, believing instead that fermentation was purely a chemical process with some of the contents of the mix acting as catalysts.

Pasteur took samples from Bigo's vats producing good alcohol and also from vats whose alcohol output was spoilt and sour, to examine them under a microscope in his own laboratory. In the healthy samples he observed growing yeast cells sprouting little buds, while in the sour, grey samples from the deficient vats there were no globules of growing yeast, but instead there were shimmering individual rod-shaped organisms. He concluded that the yeast involved in alcoholic fermentation was indeed a living organism which fed on beet juice and that alcohol was the end product of the yeast's metabolic processes. He guessed that the tiny vibrating rods in the grey samples were preventing the yeast from growing and that the end product of this unwanted process was lactic acid. He observed that a similar type of organism could be seen in sour milk and guessed that different organisms could produce different end products. He published his results in 1858.

The notion that a drinkable product such as alcohol was the waste discharged from the digestion system of a small creature was a terrible heresy at the time and the subject of ridicule by establishment chemists. Nevertheless, Pasteur's theories became the basis of microbiology.


In 1857 Pasteur took up an appointment as Director of Administration and Scientific Studies at his old school, the École Normale Supérieure, France's premier educational establishment in Paris where he developed an interest in the wider study of the origins of life and the continuing scientific debate about spontaneous generation.

At the time, conventional wisdom said that life could generate spontaneously from non-living material. This was deduced from experiences such as the occurrence of maggots which seem to appear from nowhere on rotting meat. Likewise infectious diseases were thought to be due to miasma or bad air (See 1849 John Snow and the Broad Street Pump). These opinions arose because the infections were due to microbes which are not visible to the naked eye but only through a microscope, and although the microscope had been invented by van Leeuwenhoek in 1668, there were still very few in use in scientific laboratories.

Pasteur did not believe these opinions and, like his contemporary Snow, he theorised that the infections were due to the presence of microbes which he had previously observed through his trusty microscope. As an initial test to verify his theory he filtered air through a cotton filter and, on examining the cotton from the used filter under the microscope, he found that it contained similar types of microorganisms to those found in decaying food.


In 1859 he devised a more definitive experiment to prove that many infectious diseases were not caused by "spontaneous generation" from inanimate matter but by "microorganisms" otherwise known as "germs". Using a beef broth which was known to be prone to putrefying due to contamination by bacteria, he first boiled the broth in a swan-necked flask to sterilise it by destroying any existing life in the sample. The flask was designed to allow the passage of air in and out of the sample, but to prevent any bacteria or microbes from entering the flask to contaminate it. (See illustration of Pasteur's Experiment). This was made possible by the flask's very long and thin, S-shaped neck and the help of gravity and moisture on its inner surface, which filtered the air, allowing its passage through the tube, but trapped any dust, spores or other particles which it contained, in the tube's narrow bends preventing them from reaching the broth.

As a result, the sterile (boiled) broth in the flask itself remained clear and sterile for up to several months so long as it did not contact the contaminated dross in the neck of the tube. However, if the neck of the flask was broken off some time after the boiling of the broth, and the broth was reexposed directly to unfiltered air, it would quickly become clouded or mouldy, indicating microbial contamination. Tilting the flask so that the broth came into contact with the accumulated dross would likewise initiate putrefaction.

Thus he proved conclusively that the exposure of a broth to air was not introducing a "life force" to the broth and that any putrefaction or growth of mould was entirely due to "airborne microorganisms" present in the air, at the same time disproving abiogenesis (the notion that life could generate spontaneously from inanimate matter).


In 1861 Pasteur published his results for which he was awarded a prize of 2500 francs from the French Academy of Sciences the following year.

This study was the basis of the Germ Theory of Disease. The realisation that microorganisms cause disease and can spread through the air or by direct contact revolutionised medicine and is still valid today.


In 1863 the French wine industry, a major contributor to the French economy, was in dire straits as a large percentage of the wine produced by vintners in many parts of the country was increasingly turning out to be diseased and sour. The government was so concerned that Napoleon III himself contacted Pasteur and asked him to help. Rather than depending on the legendary palates of the wine trade's sommeliers and connoisseurs, Pasteur examined the wine under his microscope and showed that the wine which had "turned" contained what he called "parasites" while the good wine was clear and free from such parasites. The simple conclusion was that, after fermentation, potentially harmful organisms remained growing and reproducing in the wine and should be eliminated. His solution, which he patented in 1865, was to heat the wine briefly to 50 - 60°C (122 - 140°F) after fermentation, just enough to kill off any remaining organisms but not enough to destroy the wine's flavour, so that it would not go sour as it aged. Vintners were still sceptical, but out of necessity they tried it and it worked. The process became known as Pasteurisation in his honour and was soon adopted for preserving milk and beer.


In 1865 after his help with the wine industry the French government asked Pasteur if he could cure a disease which was destroying silkworms. He discovered that the disease was caused by two different types of parasitic microbes which attack silkworm eggs. He was able to isolate infected silkworms from healthy ones thus preventing further contamination.


Following in Jenner's (1796) footsteps (and methods), injecting his subjects with preparations containing attenuated forms of the bacillus that causes the disease, Pasteur developed vaccines to protect against several diseases including:

  • Chicken Cholera in 1879
  • Animal Anthrax in 1881
  • Human and Animal Rabies in 1885

Pasteur was one of the world's greatest scientists, renowned for his revelations based on meticulous observations. He described his philosophy thus "In the field of observation chance favours only the prepared minds", counsel that has guided many others since then.


1856 As an extension to his "dynamical theory of heat" published in 1851, Kelvin submitted a paper to the Royal Society outlining the "dynamical theory of electricity and magnetism" treating electricity as a fluid. It was these ideas which led Maxwell to develop his theory of electromagnetic radiation published in 1873.


In the same year Kelvin invented the strain gauge based on his discovery that the resistance of a wire increases with increasing strain.
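
Strain gauges descended from this discovery are today characterised by their gauge factor GF, the ratio of the fractional change in resistance to the applied strain ε (modern terminology rather than Kelvin's):

    \[ GF = \frac{\Delta R / R}{\varepsilon} \]

For metal wire and foil gauges GF is typically about 2.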


1857 Following his discovery the previous year that the resistance of a conductor increases with increasing strain Kelvin also discovered that the resistance also changes when the conductor is subjected to an external magnetic field, a phenomenon known as magnetoresistance. In bulk ferromagnetic conductors, the main contributor to the magnetoresistance is the anisotropic magnetoresistance (AMR). It is now known that this is due to electron spin-orbit interaction which leads to a different electrical resistivity for a current direction parallel or perpendicular to the direction of magnetisation. When a magnetic field is applied, randomly oriented magnetic domains tend to align their magnetisation along the direction of the field, giving rise to a resistance change of the order of a few percent. The AMR effect has been used for making magnetic sensors and read-out heads for magnetic disks. See also GMR
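
In the standard phenomenological description (a modern textbook form, not Kelvin's own), the resistance depends on the angle θ between the current and the magnetisation direction:

    \[ R(\theta) = R_{\perp} + \left(R_{\parallel} - R_{\perp}\right)\cos^2\theta \]

where R∥ and R⊥ are the resistances for magnetisation parallel and perpendicular to the current; the difference between them is the resistance change of a few percent mentioned above.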


1857 Wheatstone introduced the first application of punched paper tapes (Ticker tapes) as a medium for the preparation, storage, and transmission of data (another one of Bain's ideas) which was rapidly adopted in the USA to speed up the transmission of Morse code.


1858 The laying of the first Transatlantic Telegraph Cable from two wooden warships, Agamemnon and Niagara, was completed - one of the greatest engineering feats of the nineteenth century. Financed by American entrepreneur Cyrus Field, it was designed and supervised by the arrogant and incompetent amateur electrician, the aptly named Dr Edward Orange Wildman Whitehouse, a former surgeon from Brighton. The cable was made up from seven copper strands carrying the signal, insulated with a treble layer of gutta percha held together by a jute yarn impregnated with tar, pitch, boiled oil and beeswax. The protective armouring consisted of eighteen strands of seven wires each of charcoal iron bright wire and the total weight of the cable was one ton per mile. The signal was carried in one direction by the cable while the return path was through the Earth.

Unfortunately the cable failed after less than a month in use, almost before the celebrations were complete, having transmitted only 732 messages. At its peak, over a period of 20 days, the cable was able to transmit in one direction 271 messages with an average of 10 words each, and 129 messages in the other direction, but the transmissions got steadily weaker with the message pulses becoming lost in the noise until it was taking half a day or more to send a message.

The signal pulses were generated from Daniell cells whose voltage was augmented using induction coils. In an attempt to solve the problem of weak signal levels, Whitehouse, advised by Morse it is claimed, increased the battery voltage from 600 Volts to 2000 Volts with disastrous results causing the breakdown of the cable's insulation. Kelvin, a consultant on the project, had advocated solving the weak signal problem by using more sensitive receiving equipment. He had the same year patented a mirror galvanometer (originally devised by Poggendorff in 1826) which enabled the detection of very weak signals for this purpose which arch rival Whitehouse was reluctant to use, preferring his own detectors. Kelvin's work on this high profile project and his design and management of the subsequent successful cable, laid by Brunel's Great Eastern and the Archimedes, in 1866 enhanced both his reputation and his bank balance as well as his already considerable ego.


One of the last messages sent over the original cable before it failed was from the British government to General Trollope, commander of the British forces in Halifax, Nova Scotia, rescinding an order to send two regiments of troops to help quell the Indian Mutiny, a rebellion against British rule. The original order had been sent by ship, the fastest way possible, a few weeks earlier, but by now the rebellion had been contained and there was no need for reinforcements. This single message of only nine words saved the British government £60,000 - more than paying back its investment in the cable.


The Archimedes, used to lay the second cable, was the world's first propeller driven steam ship and was named appropriately after the Archimedes Screw. Built in 1839 by Henry Wimshurst, father of the inventor of the Wimshurst electrostatic generator, it preceded by sixteen years the launch of the similarly equipped Niagara which was used to lay the first cable.


It was not until 1956, almost a hundred years after the original cable was laid, that the Atlantic was spanned by the first telephone cable TAT 1.


1858 German physicist Julius Plücker at Bonn University, looking for a way to observe "pure electricity" separate from the conductor carrying it, discovered cathode rays. Aware of Hauksbee's glow discharge demonstrations in 1705, he commissioned local glassblower Heinrich Geissler to construct an evacuated tube with a metal plate or electrode at each end. Plücker and his assistant Johann Hittorf evacuated the tubes using Geissler's "mercury air pump", which produced a much greater vacuum than Hauksbee had been able to achieve. They created an electric discharge between the electrodes and observed what happened in the intervening empty space. At first, with partial vacuum, the tube was filled with an eerie glow just as Hauksbee had found but as the vacuum was increased the glow disappeared and a different greenish glow appeared on the glass near one of the electrodes. Hittorf showed that the glow was due to invisible rays which he called glow rays (now called cathode rays) which were emanating from the other electrode. He noticed that they cast shadows when objects were placed in their way indicating that they travelled in straight lines and that they were deflected by magnets indicating that they were electrically charged.

On further investigation Plücker filled the tube with different rarefied gases to observe how they conducted electricity and discovered that each gas glowed with a bright characteristic colour like modern day fluorescent lights, years before their time. Although this amazing nineteenth century invention was picked up by local shopkeepers to entertain their customers it was never commercialised and seems to have been forgotten until it was rediscovered by Claude in the twentieth century.


1858 Scottish linguist and chemist Archibald Scott Couper and German chemist Friedrich August Kekulé von Stradonitz, of Czech descent, simultaneously and independently recognised that carbon atoms can link to each other to form chains, giving birth to the study of organic chemistry. Prior to this thinking, it was believed that molecules could only have one central atom. Couper's publication was delayed for three weeks by his reviewer Charles Adolphe Wurtz and all credit for the discovery went to Kekulé. Couper was devastated and never published another paper.


1858 The electric burglar alarm, invented five years earlier by Augustus Russell Pope, was first commercialised by American inventor Edwin Holmes who is usually credited with its invention. Holmes' workshop was later used by Bell in the development of the telephone and he was the first person to have a home telephone. Holmes' Burglar Alarm business was eventually bought by the American Telephone and Telegraph Company (AT&T) in 1905.


1858 Italian chemist Stanislao Cannizzaro, using Avogadro's theories, resolved the confusion between atoms and molecules of the compounds of the same atoms allowing a unified scale for relative atomic mass of the elements to be developed.


1859 Scottish engineer and polymath William John Macquorn Rankine published his "Manual of the Steam Engine and Other Prime Movers" in which he provided a systematic treatment of the theory of steam engines. Building on Carnot's theory on the efficiency of heat engines which was based on the thermodynamic cycle of a single gaseous phase reversible process, he recognised that the relationship does not apply if a phase change is encountered, because the heat added or removed during a phase change does not change the temperature of the working fluid. He therefore developed a more general theory of heat cycles for vapour based, closed systems in which the working fluid was alternately vaporised and condensed. Now known as the Rankine Cycle, it describes the steam cycle used in modern day electricity generating plants.

See also Heat engines.
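
In modern textbook notation, taking state 1 at the condenser outlet, 2 after the feed pump, 3 at the boiler outlet and 4 after the turbine, the thermal efficiency of the ideal cycle in terms of the specific enthalpies h of the working fluid is:

    \[ \eta_{th} = \frac{(h_3 - h_4) - (h_2 - h_1)}{h_3 - h_2} \]

that is, the turbine work minus the feed pump work, divided by the heat added in the boiler.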


1859 French inventor Ferdinand Carré developed the first gas absorption refrigeration system using gaseous ammonia which he patented in 1860. The system does not depend on a compressor and instead uses heat to change the vapour back to a liquid. Due to the toxicity of ammonia such systems were mainly used for the commercial production of ice rather than for domestic applications. Since gas absorption systems can be built with no moving parts, they are still used today in portable applications where no electricity supply is available.

For an explanation of how heat is used for cooling see Refrigeration Systems in the section on Heat Engines.


1859 English naturalist Charles Robert Darwin published On the Origin of Species explaining his Theory of Evolution, that all species are descended from common ancestors by Natural Selection. This theory became more commonly known as The Survival of the Fittest. There had been many similar speculations in the past but coming from a respected scientist, and justified by evidence though not yet verifiable by experiment, it created widespread interest as well as controversy since its findings directly contradicted the Creationist theory found in the Bible and traditionally held by the church.

It was the culmination of many years' work by Darwin. As a self-funded naturalist with the aim of collecting specimens, in 1831 he accompanied Captain Robert FitzRoy's expedition to chart the coastline of South America, sailing on HMS Beagle as a passenger. The expedition was planned to take two years but lasted almost five years, during which Darwin was seasick most of the time while at sea. In South America he gathered fossils of extinct species and discovered later that they were allied to other species still living on the same continent. He noticed that finches present on three of the Galapagos Islands represented three separate species, each unique to that particular island, and speculated that "one species does change into another" by the genealogical branching of a single evolutionary tree. It was not until over twenty years later, after much further research, that Darwin eventually published his theories.


1860 Spurred by the threat of the Civil War, entrepreneurs William H. Russell, William B. Waddell and Alexander Majors launched the Pony Express mail service to bring faster communications to the American West. It consisted of relays of men riding horses carrying saddlebags of mail across a 2000 mile trail between St. Joseph, Missouri, and Sacramento, California. The journey took between ten and twelve days, the relay of pony riders covering around 250 miles in a 24-hour day. Soon the Pony Express had more than 100 stations, 80 riders, and between 400 and 500 horses, becoming part of the legend of the Old West. Sadly, despite its fame, the service lasted only 19 months before the completion of the Pacific Telegraph line in October 1861 rendered it obsolete and bankrupted its investors.


1860 Belgian engineer Jean Joseph Étienne Lenoir patented the first practical internal combustion engine, a single-cylinder, two-stroke engine which burnt a mixture of coal gas and air. It was a double acting configuration with the power stroke and exhaust stroke taking place simultaneously on opposite sides of the piston. The fuel/air charge was not compressed before ignition which was provided by a spark from a Ruhmkorff coil. His patent also included the provision of a carburettor so that liquid fuel could be substituted for gas. The thermodynamic cycle on which the engine was based is named the Lenoir cycle after him.

Lenoir went on to build an experimental vehicle driven by his gas-engine, which managed to achieve a speed of 3 km/hour in 1862.


1860 Munich clockmaker Christian Reithmann was granted a patent for a four stroke internal combustion engine, but lost out to Otto in subsequent legal patent disputes. He is also reputed to be the first person to use Hydrogen to power an internal combustion engine.


1860 The Lead Acid battery, the first practical rechargeable storage battery, was demonstrated by Raymond Gaston Planté. It used spiral wound electrodes of Lead and Lead Oxide immersed in Sulphuric Acid and, despite delivering remarkably high currents, it remained a laboratory curiosity for two decades until its manufacturability and performance were improved by Fauré. The reversible battery cell chemistry had been observed 60 years earlier by Gautherot using Copper electrodes but he failed to realise the potential of his discovery. (Sorry!) After over 145 years of development, patents are still being awarded for improvements to this simple device. Currently the value of Lead Acid batteries sold every year in the world is over $30 Billion and still growing.
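
For reference, the overall discharge reaction of Planté's cell, which is reversed on charging, is:

Pb + PbO2 + 2H2SO4 → 2PbSO4 + 2H2O

Both plates are converted to Lead Sulphate on discharge, the Sulphuric Acid electrolyte is diluted, and the cell delivers a nominal 2 Volts.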

1860 Concerned with the security of coal supplies, French mathematics teacher Augustin Mouchot started work on the design of a solar powered motor, the first practical application of solar energy. The following year he was granted a patent for his design which used sunlight to boil water in a solar boiler to raise steam to drive a conventional motor. By 1865 his efficiency improvements included solar collectors or reflectors to catch and focus more of the sun's energy and also a tracking device to maintain the optimum orientation towards the sun.


1860 Scottish physicist James Clerk Maxwell showed that white light can be generated by mixing just three colours, not the full spectrum as indicated by Newton.


The following year he published "On the theory of primary colours" in which he explained that any colour, not just white light, can be generated with a mixture of any three primary colours. He chose red, green and blue and produced the world's first colour photograph at a demonstration of colour photography to the Royal Institution in London in 1861. The subject was a tartan ribbon. Three separate monochrome images were made by exposing the ribbon through red, green and blue filters respectively to make three lantern slides. A colour image of the ribbon was then created by projecting the three images from the slides simultaneously on to a screen through three separate lanterns, each equipped with the same filter used to make its image. See Maxwell's Colour Photograph.


Maxwell also developed the colour triangle, a practical tool for generating any desired colour. The vertices of the triangle represent the primary colours and the proportions of each primary colour required to generate the desired colour are determined by the distance of the desired colour from each vertex.


Maxwell's work could be considered to be the basis for modern colorimetry. Colour television and HTML, the language used to generate the colours in Internet browsers, work on the principle of combining different proportions of red, green and blue primary colours (RGB) to produce the full spectrum of colours as proposed by Maxwell.
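
The additive principle is easy to demonstrate. The Python sketch below mixes three primary proportions into the 8-bit RGB values used by HTML and colour displays; the weights play the role of the distances in Maxwell's colour triangle.

def mix(r, g, b):
    """Scale the primary proportions so the largest becomes full intensity (255)."""
    peak = max(r, g, b)
    return tuple(round(255 * w / peak) for w in (r, g, b))

print(mix(1, 1, 1))   # equal proportions of the primaries -> white (255, 255, 255)
print(mix(1, 1, 0))   # red plus green                     -> yellow (255, 255, 0)
print(mix(2, 1, 1))   # a red-heavy mix                    -> desaturated red (255, 128, 128)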


1861 German schoolmaster Johann Philipp Reis made the first public presentation of a working telephone to Frankfurt's Physics Association (Der Physikalische Verein) and published "Telephony Using Galvanic Current". His transmitter and receiver used a cork, a knitting needle, a sausage skin, and a piece of platinum. Initially fifty units were made but their performance was erratic. Unfortunately Reis suffered from tuberculosis and had neither the time nor the energy to perfect his invention, which he called the "Telephon", nor did he find the time to patent it. He died at the age of 40.


1861 Italian immigrant to the USA, fugitive from persecution as a supporter of the Italian unification movement, Antonio Santi Giuseppe Meucci, after constructing numerous devices which enabled the transmission of sound, demonstrated a working telephone system in New York. It was based on a system he had devised for communicating between his bedridden wife's room and his workshop in the basement. He called it the Telettrofono and it was reported in the local Italian language newspaper "L'Eco d'Italia" at the time.


Meucci was perpetually short of cash. He was a prolific inventor but was unsuccessful in commercialising his ideas and this consumed most of his income. Nevertheless he also provided financial support to the leader of the Italian unification movement Giuseppe Garibaldi during his exile in the United States.


Meucci continued to devise improvements to his telephone system, including inductive loading (in 1870) to enable longer distance calls. Unfortunately, in 1871 when he was incapacitated with serious burns from an explosion aboard the steamship Westfield on which he was travelling, his wife sold all his early models of telephone devices for $6. Meucci could not afford the $250 needed to patent his system, however in 1871 he did manage to obtain a cheaper official "Caveat" stating his paternity of the invention. After the sale of the old prototypes, in 1874 he handed some new models to Western Union Telegraph for evaluation and these were subsequently seen by Alexander G. Bell who had access to the laboratory where they were stored. In 1876 Meucci was surprised to read in the newspapers that Bell was credited as the sole inventor of this amazing new device. United States Patent No. 174,465, issued to Alexander Graham Bell in 1876, became recognized as the world's "most valuable patent." Meanwhile Meucci died in poverty in 1889, bringing to an end the US Government's fraud proceedings against Bell.


Meucci was finally recognised as the first inventor of the telephone by the United States Congress in its resolution 269 dated June 15, 2002, 113 years after his death.


1861 French engineer Alphonse Beau de Rochas patented the four stroke cycle, the principle on which most modern internal combustion engines depend, though he never built an engine.


1862 German travelling salesman and inventor Nicolaus August Otto demonstrated the World's first successful four-stroke, spark ignition, internal combustion engine. Prior to that, three patents for four stroke engines had been awarded: the first to Italian inventors Eugenio Barsanti and Felice Matteucci in London in 1854, the second to German engineer Christian Reithmann in 1860 and the third to French engineer Alphonse Eugène Beau de Rochas in 1861. However none of these engines achieved commercial application and there is no evidence that Otto was aware of these developments. In 1864, with Eugen Langen, the owner of a sugar factory, Otto established N.A. Otto & Cie. (today's DEUTZ AG) to manufacture the engines. Initially they made only stationary engines but today the Otto cycle, named after him, is the operating principle used by the vast majority of the world's piston engines.

See also Heat engines.


1863 The British government passed the Alkali Works Act setting limits on the emissions of noxious substances, one of the first attempts to recognise and control environmental pollution. Alkali compounds were widely used at the time in the production of glass, soap, and textiles and were manufactured using the Le Blanc Process whose byproducts included various harmful emissions including hydrochloric acid, nitrous oxides, sulphur and chlorine gas. As a result, manufacturing plants were ringed by dead and dying vegetation and scorched earth and local residents suffered health problems. The new law was backed by the appointment of Alkali Inspectors who monitored pollution levels.

One of the founders of modern chemical engineering was George E. Davis who started his career as an "Alkali Inspector". He stressed the value of large scale experimentation (the precursor of the modern pilot plant), safety practices, and a unit operations approach for controlling chemical manufacturing processes.


1863 Ányos Jedlik, then physics professor at the University of Pest in Hungary, introduced his multiplying capacitor battery in which a bank of electrostatic generators was used to simultaneously charge a parallel bank (battery) of capacitors. The charged capacitors were then switched to a series connection so that the voltage appearing on the output terminals was equal to the sum of the voltages on the individual capacitors, enabling very high voltages to be built up. He was awarded a gold medal at the 1873 Vienna World Exhibition for his design.
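
A minimal sketch of Jedlik's parallel-charge, series-discharge principle in Python; the source voltage and capacitor count are illustrative figures, not Jedlik's own:

source_volts = 2000.0   # each capacitor charges to the generator voltage in parallel
n_capacitors = 10

charged = [source_volts] * n_capacitors   # bank charged in parallel
output_volts = sum(charged)               # switched into series, the voltages add

print(f"Output: {output_volts:.0f} V")    # 20000 V from a 2000 V source

The same parallel-charge, series-discharge idea reappeared in the twentieth century in the Marx impulse generator used to produce very high voltage pulses.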


1863 English geologist, Henry Clifton Sorby, developed techniques for studying the microscopic structure of iron and steel by polishing the surface and etching it with acid so that the structure could be observed with a microscope. In this way he discovered how the strength of the steel is determined by small but precise quantities of carbon in its content. After 4,000 years of empirical studies on steel making, Sorby's metallographic techniques at last provided a much needed tool for steel makers to understand, control and improve the properties of their steel.


See also Iron and Steel Making.


1864 Maxwell predicted that light, radiant heat, and "other radiations if any" are electromagnetic disturbances in the form of waves propagated through an electromagnetic field according to electromagnetic laws. It was not until 1873 that Maxwell provided the theoretical justification for his predictions.


1864 James Elkington, the owner of a silver plating works in Birmingham, invented a commercial method for the refining of crude copper by the electrolytic deposition of pure copper from a solution of copper salts. He patented the idea the following year and in 1869 he founded the first electrolytic refining plant using this process, at Pembrey in South Wales.


1865 Gregor Mendel, an Austrian monk who had initially trained in mathematics and philosophy before entering the priesthood, outlined the Laws of Inheritance after experimenting with pea plants which he chose for convenience because of the relatively short time between their generations. In a series of carefully controlled experiments, he monitored some of their seven distinct characteristics or traits, which included flower colour, position of flowers, seed colour, seed shape, pod colour, pod shape and plant height, over several generations. The experiments included cross pollination between different examples or 'forms' of the same characteristic, such as crossing pure stock of white flowered plants with similar stock of purple flowered plants. He observed that the characteristics of the next generation's offspring were not a variable mixture of the properties of the original pair. In the case of flower colour he noted that the colour of the individual new flowers was not a random blend somewhere between white and purple; instead the colours of the offspring were either pure white or pure purple, in fixed proportions. He deduced that for each trait there were two possible 'forms', now called alleles (e.g. plant height which may be tall or short, flower colour which may be white or purple), which are passed on in pairs.

In the case of the pea plant flowers, the ratio of purple to white flowers was 3 to 1 and this proportion remained the same for subsequent generations. For each characteristic, he named the particular trait found in the greatest proportion the dominant variety and the other form he called the recessive variety.

He also noticed that the characteristics were independent from each other, so that the flower colours had no relationship with other characteristics such as plant height and its alleles (tall or short), and that each factor could be expressed independently.

Influenced by his early education, Mendel looked for a mathematical explanation of his findings. From his knowledge of probability and statistics he deduced, from the fixed ratios between generations and the independence of the individual characteristics, that inheritance must be governed by fixed, discrete rules and that these rules must be passed between generations by discrete physical particles or 'units'. These discrete factors are now called genes.

Mendel's Laws

  • The Law of Independent Assortment: All of the pairs of alleles split up separately so that any mix of alleles from the parent is possible in the gamete (the male and female reproductive cells, sperm and eggs, of the parents).
  • The Law of Dominance: If an organism has an allele for a dominant trait, then this is the trait that will be expressed.

His paper "Experiments with plant hybridisation" was published in 1866 and though the importance of his work was not recognised at the time, he was later acknowledged to be the Father of Genetics.
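
Mendel's 3 to 1 ratio can be reproduced with a simple simulation. The Python sketch below assumes a single gene with a dominant allele 'P' (purple) and a recessive allele 'w' (white); crossing two hybrid (Pw) parents gives purple and white offspring in roughly the observed proportions.

import random

def offspring(parent1, parent2):
    # each parent passes on one randomly chosen allele (segregation)
    return random.choice(parent1) + random.choice(parent2)

counts = {"purple": 0, "white": 0}
for _ in range(10000):
    child = offspring("Pw", "Pw")
    # the dominant allele is expressed whenever it is present (Law of Dominance)
    counts["purple" if "P" in child else "white"] += 1

print(counts)   # roughly {'purple': 7500, 'white': 2500}, the 3 to 1 ratio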


1865 Clausius introduced the concept of entropy (from the Greek for "transformation"), loosely defined as "a measure of the internal energy of a system that cannot be converted to mechanical work" or "the property that describes the disorder of a system". He restated the Second Law of Thermodynamics, first outlined by Kelvin, in the context of system entropy as "In a closed system the entropy can only increase".
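
For a reversible transfer of a quantity of heat Q at absolute temperature T, Clausius defined the entropy change as ΔS = Q/T. As an illustration, melting 1 kg of ice at 0 °C (273 K) absorbs about 334 kJ of latent heat, so the entropy of the water increases by 334,000/273, roughly 1,200 Joules per Kelvin.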


1865 French engineer Pierre-Émile Martin took out a license from German engineer, Karl Wilhelm Siemens and developed the open-hearth process in an attempt to circumvent the Bessemer patents. This process converts Iron into steel in a broad, shallow, gas fired open-hearth furnace, by adding scrap Iron including wrought Iron or Iron oxide as well as the alkaline limestone to molten pig Iron until the carbon content is reduced by dilution and oxidation. The process allows for the production of larger batches of steel than the Bessemer process. It also allows precise control of the specifications of the steel but it is very slow.


See also Iron and Steel Making.


1865 The International Telecommunication Union (ITU), the world's oldest international organisation and an example of international cooperation at its best, was established to develop a framework agreement covering the interconnection of the first national and independent telegraph networks which at the time were built and operated to different and often incompatible standards. Its agreements cover interconnections, signalling and message protocols, equipment standards, operating instructions, tariffs, accounting and billing rules.


Today every telephone whether it is a new push button phone or an old dial phone, an analogue or digital cordless phone, a mobile phone, a payphone or a proprietary office system phone can be connected to every other telephone in the world. The same network is used to connect fax machines and the telephone message may be analogue or digital. The telephone message may be routed to an office in New York, a remote rural village in China or it can find the called party wherever they might be driving their car in Europe, passing through open overhead wires, underground cables, microwave links, fibre optic links, satellite links, undersea cables or local wireless links on the way. The signalling will be understood, the message will get through and the intermediate organisations carrying the call will get paid for their service.


With the advent of radio and later television, the ITU took on a similar role in managing the use of the radio spectrum, regulating frequency allocations, bandwidths and transmission powers to avoid the possible chaos of millions of transmitters from all over the world interfering with each other. Despite the finite limitation on the available bandwidth, the ITU's regulatory framework also allows the flexibility to accommodate an ever growing number of users as well as new applications such as radar, cellular phones and GPS satellite navigation and the use of new modulation, multiplexing and transmission technologies as they have been developed to ensure the efficient use of this scarce resource.


The telephone network used to be the biggest machine in the world. Now with the advent of the Internet the machine is even bigger with computers as well as telephones connected together over the same network with modems carrying data and broadband terminals passing data, video and a host of new services down the same old wires and it still all works thanks to the ITU working anonymously in the background.

And all of this has been achieved with the ITU's recognition of "the sovereign right of each State to regulate its telecommunications".


See ISO and the Internet for how NOT to do it.


1866 Almost thirty years after Davenport had built the first practical electric motor using electromagnets in both the stator and rotor, the same technique was applied to the self energising dynamo. A wound rotating electromagnetic armature, replacing the weaker permanent magnet of the magneto, was invented almost simultaneously by Samuel Alfred Varley, whose design was patented on 12 December 1866, by Werner Siemens, who publicised his design on 17 January 1867, and by Charles Wheatstone, who presented a paper to the Royal Society on 4 February 1867 about the principles involved. The design permitted much more powerful and efficient DC generators.

It was later revealed that a patent had been granted in 1854 to Mr. Soren Hjorth, a Danish railway engineer and inventor for a similar invention with self excited armature coils. Hjorth's patent is to be found in the British Patent Office Library.

The principle had also been demonstrated by Hungarian priest Ányos Jedlik in 1861.


The advent of practical dynamos provided a convenient, low cost, inexhaustible source of electric power overcoming many of the limitations of the battery and marked the beginning of electricity generation by electromechanical means rather than by electrochemistry. Rotary generators paved the way for the widespread use of electricity for both high power industrial applications and for consumer appliances in the home.


1867 The reversibility of the dynamo was enunciated by Werner Siemens but it was not demonstrated on a practical scale until 1873 by Gramme and Fontaine.


1867 Kelvin presented to the Royal Society a paper "On a self-acting apparatus for multiplying and maintaining electric charges, with applications to illustrate the voltaic theory" describing a water powered electrostatic generator.


1867 Swedish chemist Alfred Bernhard Nobel was awarded a patent for the invention of Dynamite. Since 1859, Nobel had been investigating ways of safely manufacturing and handling nitroglycerine, the highly volatile explosive recently discovered (in 1847) by Ascanio Sobrero whom Nobel had known as a fellow student at the University of Paris. This was the first explosive that was more powerful than gunpowder but, though it could be highly effective, it was too unstable and impractical to use. In 1863, Nobel's first invention was the detonator followed in 1865 by the blasting cap, both of which ensured a more controlled explosion of the nitroglycerine and made possible the use of this much stronger explosive. In 1862 Nobel set up a factory to produce safer nitroglycerine, but development was fraught with difficulties. In 1864 an explosion at the manufacturing plant killed his younger brother and four others. Nobel persevered however and despite further accidents he discovered that by adding a powder of kieselguhr, a sedimentary rock consisting of fossilised remains of diatoms, the oily nitroglycerine could be transformed into a safer malleable paste which could be shaped into rods suitable for blasting rock by inserting the explosive into holes drilled into the rock. This was Dynamite.


In 1876 Nobel patented Gelignite, an even more powerful jelly-like explosive, formed from gun-cotton and other explosive materials, such as Sodium or Potassium nitrate, dissolved in nitroglycerine which was more stable and more easily formed into cavities either prepared or available for accepting the explosive charge. Without a detonator it simply burns rather than explodes and is thus reasonably safe to handle.


Nobel's explosives and detonators were soon used worldwide for mining and military applications with production supplied from 16 factories in 14 countries eventually including the Swedish Bofors armaments factory which he acquired in 1894. His development work continued and Nobel was awarded 355 patents for his inventions. All this brought him great wealth the bulk of which (31,225,000 Swedish Krona or £1,687,837) he bequeathed in his last will and testament dated 1895 (one year before his death) to establish the Nobel Prizes, to be awarded annually without distinction of nationality.

He wrote "The whole of my remaining realisable estate shall be dealt with in the following way: the capital invested in safe securities by my executors, shall constitute a fund, the interest on which shall be annually distributed in the form of prizes to those who, during the preceding year, shall have conferred the greatest benefit on mankind".


1867 The first practical typewriter was invented by Milwaukee newspaper editor Christopher Latham Sholes and his colleagues, Carlos Glidden and Samuel W. Soule. Sales did not immediately take off and early designs suffered from clashing and jamming of the keys when fast typing was attempted. At the suggestion of Sholes' financial backer, James Densmore, Sholes re-laid out the keyboard into what eventually became the familiar QWERTY layout, spacing out pairs of keys which are often used together to avoid jams by effectively slowing down the typist.

Commercial success eventually came when the patents, manufacturing and sales rights were sold to the Remington Arms Company where the design continued to undergo many engineering improvements. One of the innovations was a minor keyboard layout change to replace the "period" key, previously allocated a place on the top row, with the "R" key so that their new brand name "TYPE WRITER" could be typed out from the keys in only one row of the keyboard.

In return for the rights they obtained, Remington offered Sholes and Densmore either cash or royalties from future sales. Sholes took the cash, $12,000, a considerable sum in those days. Densmore took the royalties and eventually received $1.5 million.


1868 Invention of the Leclanché Carbon-Zinc wet cell by the French railway engineer Georges Leclanché. It used a cathode of Manganese dioxide mixed with Carbon contained in a porous pot and an anode of Zinc in the form of a rod suspended in an outer glass container. The electrolyte was a solution of ammonium chloride that bathed the electrodes. The Manganese dioxide acts as a depolariser, absorbing the Hydrogen gas released at the cathode. The first practical battery product to be commercialised, it was immediately adopted by the telegraph service in Belgium and in the space of two years, twenty thousand of his cells were being used in the telegraph system. Later, it was also Alexander Graham Bell's battery of choice for his telephone demonstrations. Domestically however its use for many years was limited to door bells.

Leclanché's electrochemistry was implemented with a different cell construction by Gassner in 1886 to make more convenient dry cells which still survive today in the form of Zinc-Carbon dry cells, the lowest-cost flashlight batteries. Polaroid's PolaPulse disposable batteries used in instant film packs also used Leclanché chemistry although in a plastic sandwich.


1868 Maxwell analysed the stability of Watt's flyball centrifugal governor. Like Airy, he used differential equations of motion to find the characteristic equation of the system and studied the effect of the system parameters on stability and showed that the system is stable if the roots of the characteristic equation have negative real parts. He thus established the theoretical basis of modern feedback control systems or cybernetics.
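
Maxwell's criterion is easy to check numerically. The Python sketch below finds the roots of a characteristic polynomial with NumPy and tests their real parts; the example coefficients are invented for illustration.

import numpy as np

def is_stable(coefficients):
    """Coefficients of the characteristic polynomial, highest power first."""
    roots = np.roots(coefficients)
    return all(root.real < 0 for root in roots)

print(is_stable([1, 3, 3, 1]))   # (s + 1)^3 = 0, all roots at -1 -> stable (True)
print(is_stable([1, 0, 1]))      # s^2 + 1 = 0, roots at +/-i -> sustained oscillation (False)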


1868 French engineer Jean Joseph Farcot patented improvements to machine control and in 1873 published a book entitled Le Servo-Moteur introducing the notion of servomechanisms which allow a small control system to control pieces of far heavier machinery.


1869 Prussian physicist Johann Wilhelm Hittorf published his laws governing the migration of ions. These were based on the concept of the transport number, the fraction of the total electric current carried by each species of ion, which he had previously developed. He had noted in 1853 that some ions travelled more rapidly than others. By measuring the changes in the concentration of electrolysed solutions, he computed from these the transport numbers (relative carrying capacities) of many ions.


1869 German chemist Julius Lothar Meyer discovered the periodic relationship between the elements by plotting a graph of atomic weight against atomic volume, however its publication was delayed by the reviewer.

Working at the same time, this periodic relationship was also noticed by Russian chemist Dimitri Ivanovich Mendeleyev. By arranging cards bearing the names, atomic weights and some properties of the 65 elements known at that time into rows and columns, he noticed an underlying pattern. His Periodic Table of the Elements was published before Meyer's and the Periodic Table thus became attributed to Mendeleyev. Since then over 700 versions of the table have been produced.

Gaps in the table led scientists to speculate on the existence of hitherto unknown elements with predicted properties related to their positions in the table. The existence and properties of these elements was duly confirmed once suitable experiments could be devised.


1869 French paper manufacturer Aristide Bergès built several hydropower machines at Lancey near Grenoble. He directed water from a 200 metre high Alpine waterfall through a Girard impulse turbine, and later from a head of 500 metres through a Pelton turbine, to power the machines in his paper mill. He was very active in promoting this energy source as a basis for industrial development in the Alpine valleys. Unfortunately "hydropower" has become confused with "hydroelectric power" and Bergès has been incorrectly credited with the invention, in 1869, of the first hydroelectric power installation, and even with coining the expression "hydroelectric power". In fact the first hydroelectric scheme was implemented in England by William Armstrong in 1878. Bergès did eventually introduce hydroelectric power at his plant in 1882 using a Gramme dynamo, which had only been invented in 1873. Bergès dubbed this abundant energy source "Houille Blanche", literally "White Coal", by analogy with the coal that powered the steam engines of the day.


1869 John Tyndall explained that the sky is blue because of the scattering of light by fine particles and molecules in the atmosphere, now known as the Tyndall Effect. He noticed that most light wavelengths pass through the atmosphere largely unaffected, but that the shorter wavelengths of blue light are scattered much more strongly by the molecules of the atmosphere than the longer red wavelengths. The effect is more commonly known as Rayleigh scattering, after Lord Rayleigh, who studied it in more detail some years later and showed that the scattered intensity varies as the inverse fourth power of the wavelength.
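
Rayleigh's inverse fourth power law makes the size of the effect clear; a quick check in Python with typical textbook wavelengths:

blue_nm = 450.0   # wavelength of blue light
red_nm  = 700.0   # wavelength of red light

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red")   # ~5.9x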


1870 New Yorker John Wesley Hyatt patented the first commercially successful synthetic plastic, now called Celluloid, a material first created by Parkes in 1855. He first used it as a coating for billiard balls and later for denture plates.


1870 John Player developed a process of mass producing strands of glass with a steam jet process to make what was called mineral wool for use as an effective insulating material. (Editor's Note - It has not yet been possible to verify this first statement which could be an oft repeated internet myth related to the next paragraph. Please email me if you can help. The next statement is true.)


John Player had no connection with John Player cigarettes, a major brand in the 1980s. Nevertheless an unfounded rumour spread in the late 1980s and early 1990s, no doubt encouraged by their competitors, that the filters in John Player cigarettes contained fibreglass resulting in major damage to their market share.


1870s Austrian physicist Ludwig Eduard Boltzmann published a series of papers developing the theory of statistical mechanics with which he explained and predicted how the properties of atoms such as mass, charge, and structure determine the visible properties of matter such as viscosity, thermal conductivity, and diffusion. He showed that the average kinetic energy of a molecule of an ideal gas is proportional to its absolute temperature, the mean translational energy being E = 3/2 kT, where the constant k, equal to 1.38 × 10⁻²³ Joules per Kelvin (J/K), is called the Boltzmann Constant in his honour.
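
A quick check of the relation E = 3/2 kT at room temperature:

k_boltzmann = 1.38e-23   # J/K, the Boltzmann Constant
temperature = 293.0      # K, about 20 degrees Celsius

mean_energy = 1.5 * k_boltzmann * temperature   # mean translational kinetic energy
print(f"Mean kinetic energy per molecule: {mean_energy:.2e} J")   # about 6.1e-21 J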


Boltzmann also derived a theoretical relationship for the thermodynamic entropy of a gas. 70 years later Shannon used an equivalent relationship to define the information entropy in a message.


Tragically ill and depressed, Boltzmann took his own life in 1906.


1871 Weber proposed a model of atomic structure in which atoms contain positive charges surrounded by rotating negative particles, and in which the application of an electric potential to a conductor causes the negative particles to migrate from one atom to another, creating a current flow.


1871 German scientist Steiner revived an apparently dead patient by passing a weak electrical current directly through his heart, the first recorded use of electric shock treatment for reviving people after cardiac arrest.


1871 After witnessing a death from smoke inhalation, John Tyndall invented the fireman's respirator or gas mask. See also Stenhouse.


1872 PVC, Polyvinyl Chloride, was first created by German chemist Eugen Baumann. It was not patented until 1913. In 1926 Waldo Semon invented a new way of making PVC into a useful product and he is now generally credited with discovering it.


1872 One of the many "Fathers of Radio" West Virginian dentist Mahlon Loomis was granted a patent for "a new and Improved Mode of Telegraphing and of Generating Light, Heat, and Motive Power". Although not a true radio system it was an attempt at making a wireless telegraphy system by replacing the batteries with electricity gathered from the atmosphere by means of flying kites attached to long copper wires. It used a Morse key between one kite wire and the ground to send signals and at the remote kite it used a galvanometer between the wire and the ground to detect the signals. It is claimed that signals using this method were transmitted over 14 miles, however it is questionable whether this system ever worked and it was never commercially exploited. Nevertheless the Guinness Book of Records credits Loomis with sending the first signals through the air. It was another sixteen years before Hertz demonstrated the existence of radio waves.


1872 American telecommunications engineer Joseph Barker Stearns of Boston developed the first practical telecommunications duplexing system. He accomplished this by using two different types of signals, one for each direction. In one direction he used varying strength signals (e.g. On or Off) which he detected with a common or neutral relay, while in the opposite direction he used varying polarity signals (Plus or Minus) which he detected with a polarised relay. The receivers were designed to respond only to signals of the appropriate type from the remote transmitter and to ignore local transmissions. Stearns' system effectively doubled the capacity of the installed telegraph lines and Western Union rapidly acquired rights to use it.


1872 British electrical engineer Josiah Latimer Clark invented the Clark Standard Cell which provided a reference voltage of 1.434 volts at 15 °C. The cathode was Mercury, in contact with a paste of Mercurous Sulphate, and the anode was Zinc amalgam in contact with a saturated solution of Zinc Sulphate.


1872 American mechanical engineer George Brayton patented his Ready Motor, a continuous combustion, two cylinder, two stroke, kerosene (paraffin) engine. It used a rocking arm coupled to a flywheel to drive the pistons alternately up and down. One piston was used to compress the air which was then mixed with a controlled amount of fuel, ignited by a continuous flame in a combustion chamber and fed into the second chamber where the hot gases expanded providing the power stroke. The modern gas turbine uses the same three fundamental components of Brayton's system, a compressor, a continuous combustion burner and an expansion chamber from which work can be extracted, and the thermodynamic cycle on which it is based, heat addition at constant pressure, is now called the Brayton cycle. Brayton himself never made anything other than piston engines.
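
For the ideal cycle the efficiency depends only on the pressure ratio across the compressor and the ratio of specific heats of the gas; a minimal Python sketch with an illustrative pressure ratio:

gamma = 1.4            # ratio of specific heats for air
pressure_ratio = 10.0  # compressor outlet pressure / inlet pressure, illustrative

# Ideal Brayton cycle efficiency: eta = 1 - r^((1 - gamma)/gamma)
efficiency = 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)
print(f"Ideal Brayton efficiency: {efficiency:.1%}")   # about 48% for r = 10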

See also Gas turbines and Heat engines.


1873 Scottish physicist James Clerk Maxwell published his "Treatise on Electricity and Magnetism" in which, using a water analogy, he unified the laws of electricity and magnetism, distilling all electromagnetic theory into a set of four rules now accepted as one of the fundamental laws of nature. Now known as Maxwell's Equations, they were one of the most important scientific works of the century, not only explaining all electric, magnetic and radiation phenomena known at the time but also providing the theory describing light waves as well as the foundations for the two great theoretical advances of the twentieth century, relativity and quantum theory.

Maxwell's four equations express, respectively:

  • How electric charges produce electric fields - Gauss' Law for electric fields.
  • The absence of single magnetic poles. North and South magnetic poles always appear in pairs and the total magnetic charge is always zero. - Gauss' Law for magnetic fields.
  • How currents produce magnetic fields - Ampere's Law with an additional term called the displacement current showing that a changing electric field is equivalent to a current also inducing a magnetic field.
  • How changing magnetic fields produce electric fields - Faraday's Law of induction.

In mathematical vector form these complex relationships can be expressed very simply as:-

∇• D = ρ       or alternatively   ∇• E = ρ/ε₀

∇• B = 0

∇× H = J + ∂D/∂t

∇× E = − ∂B/∂t

Where
ρ is the free electric charge density (not including dipoles)
D is the electric displacement field or flux density = ε₀E
B is the magnetic flux density = µ₀H
H is the magnetic field
J is the current density
E is the electric field
∇• is the divergence operator
∇× is the curl operator
ε₀ is the electric permittivity of a vacuum
µ₀ is the magnetic permeability of a vacuum


Maxwell originally expressed his theory in 20 partial differential equations. They were subsequently simplified in 1884 by Oliver Heaviside who expressed them in vector form which is the form in which they are shown above.


As some physics teachers are fond of saying:

"The Lord said Let there be light and there were Maxwell's equations"


These four equations provided the theoretical justification of his 1864 predictions of the existence of radiation or electromagnetic (radio) waves, even though at that time there was still no evidence to demonstrate that such a phenomenon existed.

Maxwell showed that electromagnetic fields hold energy which is in every way equivalent to mechanical energy and that a changing magnetic field will induce a changing electric field which in turn induces a changing magnetic field, and so on, such that an electromagnetic wave is created in which the energy oscillates between the electric and magnetic fields.

He also showed that neither the electric wave nor the magnetic wave can exist alone. They travel together, always at right angles to, and in phase with, each other.

The velocity of propagation v of the electromagnetic wave can also be derived from Maxwell's equations as v = E/B, the ratio between the electric field strength E and the magnetic flux density B, which is also equal to 1/√(µ₀ε₀). From a knowledge of the magnitudes of µ₀ and ε₀ he determined that the velocity of propagation of the electromagnetic wave is constant and equal to the speed of light, and hence that light is an electromagnetic wave.
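
The calculation is simple to reproduce with modern values of the two constants:

import math

mu0  = 4 * math.pi * 1e-7   # magnetic permeability of a vacuum, H/m
eps0 = 8.854e-12            # electric permittivity of a vacuum, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(f"c = {c:.3e} m/s")   # about 2.998e8 m/s, the measured speed of light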

Together with the Lorentz force law describing the forces on charged particles, Maxwell's equations form the basis of the theory of electrodynamics.


It is a measure of Maxwell's genius that with four elegant and concise equations he could not only account for the movement of a compass needle next to a current carrying wire but with the same equations he was also able to predict, understand and correctly characterise mathematically such a complex phenomenon as electromagnetic radiation that nobody had yet witnessed or even imagined.

Maxwell was initially encouraged and supported in his theories by Kelvin, upon whose earlier work he built, however in his lifetime Kelvin never accepted Maxwell's conclusions believing them too theoretical and not related to reality.

It was 1888 before his predictions were proved right by experiments carried out by Heinrich Hertz.

In the twentieth century, while Einstein's relativity theory required Newton's laws to be modified, Maxwell's equations remained absolute.


See more about Electromagnetic Radiation and Radio Waves today.


Maxwell also introduced statistical methods into the study of physics, now accepted as commonplace, and made significant contributions to structural analysis, feedback control theory (cybernetics) and the theory of colour, taking the first ever colour photograph.


Maxwell was a kind and modest man, universally liked. His ideas were ahead of his time but he made no attempt to promote his work. Despite his monumental achievement, it was Hertz' name rather than Maxwell's that has become associated with radio waves and radio propagation.

He died of stomach cancer in 1879 at the age of forty eight without seeing the experimental confirmation of his theories.


Like several Victorian scientists, Maxwell used poetry to describe his interests and his work. 43 of his poems on such riveting subjects as "A Problem In Dynamics", "British Association, Notes Of The President's Address", "To The Committee Of The Cayley Portrait Fund" and "Torto Volitans Sub Verbere Turbo Quem Pueri Magno In Gyro Vacua Atria Circum Intenti Ludo Exercent" about spinning tops, were published in 1882, after his death, by his friend Lewis Campbell.


Quotations about Maxwell:

When Michael Faraday was asked what was his greatest ever discovery he replied "James Maxwell"


"The Special Theory of Relativity owes its origins to Maxwell's Equations of the Electromagnetic Field" - Albert Einstein.


"Ten thousand years from now, there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electrodynamics" - Richard Feynman


1873 Belgian carpenter and instrument maker Zénobe Théophile Gramme in partnership with French engineer and inventor Hippolyte Fontaine developed the first reliable commutators for DC machines. (The commutator is the device which reverses the current in the rotor coil as it passes from the influence of one magnet pole to the next magnet pole of opposite polarity in order to maintain a unidirectional current in the external circuit).


They also demonstrated the reversibility of their dynamo by pumping water at the Vienna International Exhibition using two dynamos connected together: one, the generator, deriving motion from a hydraulic engine, provided electrical power to the receiving dynamo which worked the pump. It is said that they discovered the phenomenon by accident when an idle dynamo was mistakenly connected across another running dynamo and began motoring. They did however realise that the importance of their discovery was not just the reversibility of the dynamo, but also the possibility of electrical power transmission: the fact that electrical power could be generated in one place and used in another.


1873 The first demonstration of electric traction in a road vehicle by Robert Davidson in Edinburgh using Iron/Zinc primary cells to drive a truck.


1873 English telegraph engineers, Joseph May and Willoughby Smith, while working with Selenium, noticed that its conductivity changed under the influence of light thus discovering the photoconductivity effect.


1873 Dutch physicist Johannes Diederik van der Waals deduced more accurate gas laws taking into account the volume of the actual molecules making up the gas and the intermolecular forces between them. Van der Waals assumed that neutral molecules behaved like dipoles, with a positive charge on one side and a negative charge on the other because their shape was distorted, and these intermolecular forces are now named van der Waals forces after him. The true nature of the forces between molecules was later explained in 1930 by Polish-born physicist Fritz London using quantum theory.

Van der Waals was awarded a Nobel Prize in 1910 for his work on the equation of state for gases and liquids.
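
The van der Waals equation of state, (P + an²/V²)(V − nb) = nRT, corrects the ideal gas law for intermolecular attraction (a) and molecular volume (b). The Python sketch below compares the two predictions for carbon dioxide; the a and b values are approximate literature figures.

R = 8.314      # J/(mol K), universal gas constant
a = 0.364      # Pa m^6/mol^2, attraction parameter for CO2 (approximate)
b = 4.27e-5    # m^3/mol, excluded volume for CO2 (approximate)

n, T, V = 1.0, 300.0, 1.0e-3   # 1 mol of CO2 in 1 litre at 300 K

p_ideal = n * R * T / V
p_vdw   = n * R * T / (V - n * b) - a * n**2 / V**2

print(f"Ideal gas:     {p_ideal / 1e5:.1f} bar")   # about 24.9 bar
print(f"van der Waals: {p_vdw / 1e5:.1f} bar")     # about 22.4 bar, noticeably lower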


1874 A thermo-electric battery based on the Seebeck effect powered by a gas heater introduced by M Clamond in France. Known as the Clamond pile or thermopile, it consisted of a stack of circular arrays of junctions of iron with a zinc-antimony alloy heated by a gas burner located in the centre of the stack. It generated 8 Volts providing a current of 2 to 3 Amps and supplied both heat and electricity to galvanising baths.


1874 Thomas Alva Edison invented the quadruplex telegraph, which was capable of sending four Morse coded messages simultaneously on a single channel. He amalgamated and rearranged the duplexer of Gintl and Farmer and the diplexer of Stearns into a single system permitting two messages to be sent in each direction. As with Gintl's duplexer design, two relays in each terminal were unresponsive to outgoing signals; one of these relays responded to current increases of the incoming signals while the other responded to current reversals of the received signals. Thus Stearns' duplexing method of distinguishing between two signals was modified by Edison to separate the signals going in the same direction (diplexing) rather than in opposite directions (duplexing). This avoided the problem of synchronising the receivers with the transmitters. The quadruplex allowed the telegraph lines to carry four times the traffic and saved the telegraph companies millions of dollars.


Edison had started the development of his quadruplex system in 1873 in cooperation with Western Union using their facilities for his experimental work. He had agreed with William Orton, the president of Western Union, a development fee and that the patents for the design would be assigned to Western Union. When the design was complete Edison was given $5000 as part payment and $25,000 later. Orton also authorised a royalty payment to Edison of $233 per year before leaving on a business trip. While he was away, Edison was approached by Jay Gould, the railroad baron, Wall Street financier, stock manipulator and head of the Atlantic and Pacific Telegraph Company, an arch rival of Western Union. He offered Edison $30,000 cash for the quadruplex patents and a job at Atlantic and Pacific. Edison accepted and wrote to Orton saying their arrangement had been a mistake and he revoked the assignment of patents to Western Union. Edison had sold the patents twice over. This earned him the title of "Master of Duplicity and Quadruplicity" bestowed on him by New York journalists. There followed years of litigation which only ended with the eventual amalgamation of the two telegraph companies. A portent of Edison's business methods to come. See Edison and Tesla.


Quadruplex telegraphs were eventually displaced by two new inventions: Baudot's multiplex telegraphy capable of eight or more simultaneous transmissions (see next) and Murray's teleprinter machines which did not use Morse code. (See following entry - Baudot code.)


Edison set up his first small laboratory and manufacturing facility in Newark, New Jersey in 1871 to produce new designs for Western Union and others. In 1876 he moved to a larger facility at Menlo Park equipped to work on any invention opportunities he might turn up. This was the world's first industrial research and development facility and was where Edison's phonograph, light bulb and electrical power systems were developed. See more about Edison's Inventions.


1874 Jean Maurice Émile Baudot, an officer of the French Telegraph Service made major improvements in the telegraph system by bringing together the five unit code devised by Gauss and Weber, now called the five bit Baudot code, and the synchronous time division multiplex (TDM) system, proposed by Farmer in 1852, into a practical design for a printing telegraph.

The five bit code was the first truly digital code, each unit having only two logical states which he represented as + and -, later replaced by the more familiar 1 and 0 digital logical states we know today. This enabled 32 possible combinations or characters, the shortest practicable code for the number of characters to be transmitted. To enable a full alphanumeric code, Baudot used two special characters to switch between letters and numbers giving effectively 64 combinations, enough to allow for 26 characters for the alphabet and 10 numbers plus other miscellaneous punctuation and synchronisation codes. Input was by 5 keys. Later adaptations by Murray in 1903 (and others) used five hole punched tape to input the characters with a sixth row of smaller sprocket holes to feed the tape through the reader. The tape had the advantage that it could be punched off line and subsequently transmitted at high speed, but more importantly the tapes enabled the transmission speed to be controlled thus facilitating the multiplexing. Early teletypewriters also used Baudot code which eventually supplanted Morse code as the most commonly used telegraphic alphabet becoming known as the International Telegraph Code No.1.

Although the code is now named after Baudot, the five digit binary code was first proposed by Francis Bacon in 1605.
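
A minimal sketch of the shift scheme in Python. The code assignments below are invented for illustration and are not the real Baudot table; what matters is how the two shift characters let 5-bit codes serve both letters and figures.

LETTERS = {"A": "00001", "B": "00010", "C": "00011"}   # illustrative codes only
FIGURES = {"1": "00001", "2": "00010", "3": "00011"}   # the same codes, other set
FIGS_SHIFT, LTRS_SHIFT = "11011", "11111"              # switch between the two sets

def encode(text):
    bits, in_figures = [], False
    for ch in text:
        if ch in LETTERS and in_figures:
            bits.append(LTRS_SHIFT); in_figures = False
        elif ch in FIGURES and not in_figures:
            bits.append(FIGS_SHIFT); in_figures = True
        bits.append(LETTERS.get(ch) or FIGURES[ch])
    return " ".join(bits)

print(encode("AB12"))   # a shift code is inserted before the figures begin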


The Baudot distributor enabled four messages to be transmitted simultaneously. Multiplexing was achieved by using synchronised motors at either end of the line with brushes which connected each channel sequentially, for a fixed interval, to a single transmission line as the motor rotated. Synchronisation codes were sent down the line to keep the transmitter and receiver in step.

In modern circuits TDM is accomplished by interleaving the bit streams from the different channels.
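
A minimal Python sketch of the interleaving idea, multiplexing three illustrative bit streams on to one line and recovering them by counting slots:

channels = ["1010", "1100", "0011"]   # three illustrative channel bit streams

# multiplex: take one bit from each channel in strict rotation
line = "".join(bits[i] for i in range(4) for bits in channels)

# demultiplex: every third bit on the line belongs to the same channel
recovered = [line[start::len(channels)] for start in range(len(channels))]

print(line)        # the interleaved stream
print(recovered)   # ['1010', '1100', '0011'], the original channels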


The unit of signalling speed, one signal element or symbol per second, is named the Baud, a shortened form of Baudot, in his honour.


1874 German physicist Karl Ferdinand Braun discovered one way conduction in metal sulfide crystals. He later used the rectifying properties of the galena crystal, a semiconductor material composed of Lead sulfide, to create the crystal detector used for detecting radio signals which Braun worked on with Marconi. Thus was born the first semiconductor device. Now called the diode, the cat's whisker detector was rediscovered and patented 30 years later by Pickard and Dunwoody.


1874 Irish physicist George Johnstone Stoney, expanding on Faraday's laws of electrolysis and the notion that an electric charge was associated with the particles deposited on the electrodes during electrolysis, proposed that the minimum unit of charge was that which was found on the hydrogen ion and that it should be a fundamental unit. He named it the "electrine". In 1891, he changed the name to "electron". He calculated the magnitude of this charge from data obtained from the electrolysis of water and the kinetic theory of gases, an early estimate of what we now call the elementary charge. Stoney was unaware of the nature of the atom and "Stoney's electron" is a unit of charge, not to be confused with J.J. Thomson's sub atomic particle which Thomson called a corpuscle but which we now call the electron.


1874 David Salomons of Tunbridge Wells, England demonstrated a 1 H.P. three wheeled electric car powered by Bunsen cells.


1875 American physicist Henry Augustus Rowland was the first to show that moving electric charge is the same thing as an electric current.

He built up an electrostatic charge on a rotating gramophone (phonograph) record by rubbing it with woollen cloth. A magnetic compass brought in close to the spinning disk was deflected, the magnitude of the deflection increasing with the speed of the disk. This showed that a magnetic field is set up not only by a current moving through a wire but also by moving electrostatic charge.


1876 On March 10 in Boston, Massachusetts, Alexander Graham Bell, a Scottish emigré to the USA, invented the telephone. Bell filed his application just hours before his competitor, American inventor Elisha Gray, founder of Western Electric, filed with the same patent examiner a notice outlining a telephone he planned to patent himself. What's more, neither man had actually built a working telephone. Bell in particular did not have a working microphone but he made his telephone operate three weeks later using the microphone described in Gray's Notice of Invention, and methods Bell did not propose in his own patent. Being a "system" using several technologies over which Bell claimed sole rights, it spawned more than 600 law suits, mostly focused on whether the concept of modulating a DC current supplied by a battery was revolutionary or insubstantial and which of the many rivals had thought of it first. Legitimate claimants included Belgian experimenter Charles Bourseul (1854), German schoolmaster Johann Philipp Reis (1861) and impoverished Italian US immigrant Antonio Meucci (1861) to whom the idea is now officially credited by the American Congress (disregarding the prior work of Reis).

Bell's United States Patent No. 174465 became recognized as the world's most valuable patent.


Similar controversies surround the invention of radio, but that's another story.


In an attempt to find an assassin's bullet lodged in the body of US President James Garfield, in 1881 Bell hastily devised a crude metal detector based on the induction balance recently devised in 1879 by David Hughes. It worked but it didn't find the bullet, indicating that it was deeper than at first thought. It was later discovered that the detector had been confused by the newly invented metal bed springs under the mattress on which the President lay. (The President died after eighty painful days from complications arising from contamination of, and further damage to his wound by the dozen or more doctors probing his body in search of the bullet).


Bell's father, grandfather, and brother had all been associated with work on elocution and speech, and both his mother and wife were deaf, profoundly influencing Bell's life's work. It was his research on hearing, speech and sound transmission which eventually led him to the invention of the telephone.

In 1877 Bell married Mabel Gardiner Hubbard, a student from his school for the deaf and the daughter of Boston lawyer Gardiner Greene Hubbard. Hubbard senior helped Bell set up the Bell Telephone Company with himself as president, ably defending the company from the avalanche of lawsuits it faced.


In later life Bell moved to the relative seclusion of his estate in Nova Scotia where he declared himself to be sick of the telephone which he regarded as a nuisance, referring to it as a "beast". He crusaded tirelessly on behalf of the deaf and worked on a variety of projects including flight and aerofoils. At odds with his genuine concern for the deaf, he was an advocate of eugenics and carried out experiments with sheep. He was convinced that sheep with extra nipples would give birth to more lambs, and built a huge village of sheep pens, spending years counting sheep nipples, before the US State Department announced that extra nipples were not linked with extra lambs.


1876 Most Iron ores, particularly those from Europe, contain substantial amounts of Phosphorus which makes the steel produced from it very brittle. Up to that time it had been necessary to use costly phosphorus free ores from Wales and Sweden in the Bessemer converters used to produce the high quality steel. Welsh metallurgist Sidney Gilchrist Thomas discovered that by adding a chemically basic (alkaline) material to the Bessemer converter it can draw Phosphorus impurities from the pig iron into the slag which is skimmed off, resulting in phosphorus-free steel. He later patented this process which was called the "Basic" Bessemer Process. Bessemer consequently replaced the original siliceous (acidic) refractory lining of his retorts with a limestone lining which produced lime (a base) when heated.

Thomas's innovation meant that iron ore from anywhere in the world could be used to make steel resulting in significant savings in production costs.


Thomas died of lung disease at the age of 34 in 1885.


See also Iron and Steel Making


1877 The telephone industry created the next major leap forward in the demand for batteries.


In Bell's original 1876 system the microphone was a passive transducer in which the acoustic power of the human voice provided the energy to create the varying electric currents which represent the sound and also to carry them down the wire to the receiver. In Bell's microphone, or transmitter in telephone parlance, sound waves impinge upon a steel diaphragm causing it to vibrate in sympathy. The diaphragm is arranged adjacent to the pole of a bar electromagnet and acts as an armature. The vibrations of the diaphragm cause very weak electrical impulses to be induced in the coil of the electromagnet. However these feeble signals were quickly attenuated as they passed down the telephone line until they were inaudible, severely limiting the range of the circuit and hence the potential of the telephone system.


During 1877 and 1878 German born American Emil Berliner, David Hughes, Thomas Edison, Bell employee Francis Blake and English curate Henry Hunnings were each working independently on designs for improved microphones based on active transducers in which the acoustic power controls an external source of power. An active transducer provides an electrical signal with about a thousand times more electrical power than the acoustical power absorbed by the transducer, and their designs considerably improved the range of the telephone at the expense of requiring power from a local battery. They all used variants of a Carbon transducer which depends on the fact that the electrical resistance of some materials varies with the physical pressure exerted on them, various forms of Carbon material, such as carbon granules, coke or lamp black, being particularly sensitive. In the carbon microphones which they developed, during the call the battery current flows constantly in a closed circuit across a capsule of carbon material between two terminals, one of which is a flexible diaphragm. The sound pressure variations are transferred to the carbon by the diaphragm, thus causing the battery current to vary in response to the sound pressure. Edison's design used lamp black and had the added refinement of an induction coil or step up transformer which superimposed the sound information from the transducer on to a separate higher level DC current flowing through the secondary winding of the coil in the main transmission line, so that an amplified signal appeared across the terminals of the secondary coil and the stronger DC current carried it further, a process we now call modulation.
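
The modulating action is easy to sketch in Python: sound pressure varies the resistance of the carbon capsule, and with a fixed battery voltage the line current varies in sympathy. All the component values below are illustrative.

import math

V_battery = 3.0    # volts, local battery
R_rest    = 60.0   # ohms, capsule resistance with no sound
depth     = 0.2    # fractional resistance swing produced by the sound pressure

for step in range(8):   # one cycle of a pure tone, sampled at eight points
    phase = 2 * math.pi * step / 8
    resistance = R_rest * (1 - depth * math.sin(phase))   # compression lowers resistance
    current_mA = 1000 * V_battery / resistance
    print(f"{current_mA:.1f} mA")   # the line current follows the sound wave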


Rather than patenting his ideas for the microphone, Hughes, who was already wealthy from his invention in 1855 of the printing telegraph, communicated his designs to the Royal Society in February 1878 and generously gave the carbon microphone to the world. This earned him the wrath of Thomas Edison who laid claim to the invention, accusing Hughes of plagiarism and patent infringement. Two months later Berliner and Edison filed for patents on carbon microphones within two weeks of each other, resulting in numerous bitter law suits which were eventually settled out of court. Hunnings patented the idea of using carbon granules which could carry higher currents, but his patent was challenged by Edison's lawyers. Being a man of limited means he conceded, sold the rights for £1000 and went on Edison's payroll. Berliner went to work for Bell who bought his design for $50,000, and Edison's design, based on principles described by Hughes but using Hunnings' crushed Carbon granules, became the basis of the standard telephone transmitter and, with a few refinements, was used for over a hundred years.


Berliner went on to found the Deutsche Grammophon Co. and his trademark image became a painting by English artist Francis Barraud of his dog "Nipper" listening to His Master's Voice, for which Barraud was paid £50 for the painting and a further £50 for the full copyright. Berliner's other notable invention was the gramophone, using a flat disk instead of the cylinder used by Edison.


1877 English experimenter William Grylls Adams and his student Richard Evans Day discovered that an electrical current could be created in Selenium solely by exposing it to light, producing the first solar cells and naming the currents generated in this way photoelectric. Although the effect was attributed to the properties of Selenium, it was in fact due to the properties of the junction between the Selenium, now known to be a semiconducting material, and the Platinum metal used to make the connection for measuring the current.


Note: Confusingly, the currents produced by solar cells, named photoelectric currents by Adams and Day, do not arise from the photoelectric effect, in which light causes electrons to be emitted from the surface of a material by the process of photo-emission. Solar cells, or photovoltaic cells, are made of semiconductor material. The incoming light (photons) moves electrons from the valence band across the band gap to the conduction band, and the internal electric field which exists across the P-N junction separates the resulting electron-hole pairs. In this way opposite charges accumulate on the two electrodes of the solar cell, and this potential difference can be used to drive a current through a wire.
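
The behaviour described in this note is captured by the textbook single-diode model of a photovoltaic cell. A minimal sketch, using illustrative parameter values rather than data for any real Selenium or Silicon device:

```python
import math

Q = 1.602e-19    # electron charge, Coulombs
K = 1.381e-23    # Boltzmann constant, Joules per Kelvin
T = 300.0        # cell temperature, Kelvin
I_LIGHT = 0.035  # photo-generated current, Amps (illustrative)
I_SAT = 1e-9     # diode saturation current, Amps (illustrative)

def cell_current(v):
    """Single-diode model: photocurrent minus the forward diode current."""
    return I_LIGHT - I_SAT * (math.exp(Q * v / (K * T)) - 1.0)

# Open-circuit voltage: the point where the two currents balance.
v_oc = (K * T / Q) * math.log(I_LIGHT / I_SAT + 1.0)
print(f"open-circuit voltage = {v_oc:.3f} V")

# Sweep the terminal voltage to trace out the I-V curve.
for mv in range(0, 500, 100):
    v = mv / 1000.0
    print(f"V = {v:.1f} V, I = {cell_current(v) * 1000:.2f} mA")
```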


It was not until 1954 that the efficiency of photovoltaic cells was improved enough to generate useful power.


1877 German engineer Ernst Werner von Siemens patented the first loudspeaker, well before the advent of electrical music reproduction.


1878 Electric alternator invented by Gramme and Fontaine.


1878 American physical chemist Josiah Willard Gibbs developed the theory of Chemical Thermodynamics, introducing the free energy concept. When a chemical reaction occurs, the free energy of the system changes. The free energy is the amount of energy available to do external work, ignoring any changes in pressure or volume associated with the change of state. Thus the change in Gibbs free energy represents the total useful energy released by the chemical action which can be made available for doing work. When the free energy decreases the reaction is spontaneous, which corresponds to an increase in the total entropy of the system and its surroundings. (The value of the free energy lies in the fact that its change is easier to measure than the change in entropy.)
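
In modern notation the free energy relationship Gibbs introduced is written:

\[ \Delta G = \Delta H - T\,\Delta S \]

where ΔH is the enthalpy change, T the absolute temperature and ΔS the entropy change; a reaction at constant temperature and pressure is spontaneous when ΔG < 0.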

He also developed fundamental equations and relationships to calculate multiphase equilibrium, and the phase rule, which specifies, in terms of the number of separate phases and the number of chemical constituents in the system, the minimum number of degrees of freedom, or variables such as temperature, pressure and concentration, which must be fixed in order to completely describe the state of a (closed) system at equilibrium. Gibbs' work laid the foundations for the theoretical representation of the energy transfers involved in chemical reactions. This allowed the performance (energy release) of galvanic cells to be quantified and predicted.
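
Two of these results are still quoted in essentially this form: the phase rule, with F the number of degrees of freedom, C the number of chemical constituents and P the number of phases, and the link between the free energy and the EMF E of a galvanic cell, with z the number of electrons transferred and F (a different F) the Faraday constant:

\[ F = C - P + 2 \qquad \text{and} \qquad \Delta G = -zFE \]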

He published his work in the Transactions of the Connecticut Academy of Arts and Sciences, an obscure journal published by his brother-in-law with a very limited, mostly local, circulation. His work on thermodynamics, a major advance in the understanding of chemical reactions, therefore remained unknown until 1883, when Wilhelm Ostwald, a Russian-German physical chemist, discovered it and translated it into German.

In 1881 Gibbs published "Elements of Vector Analysis", which presents what is essentially the modern system of vector analysis. Gibbs' vector notation and methods simplified the presentation and analysis of complex relationships between multi-dimensional quantities, such as those of Maxwell's field theory. He also made important contributions to the electromagnetic theory of light. His later work on statistical mechanics was also important, providing a mathematical framework for quantum theory.

For all his major contributions to science, Gibbs was a modest man like Maxwell who shunned fame and fortune, living a quiet and contented, simple life as a bachelor, much admired by his students at Yale where he worked.


1878 French electrician, Alfred Niaudet, published "Traité élémentaire de la pile électrique" on electric batteries in which he described over a hundred different battery types and combinations of elements, indicating the growth and importance of battery technology.


Niaudet described the various chemical mixes and designs which had been used to address a range of design goals. The polarisation problem was solved by using non polarising chemical mixes which did not produce gases, or by using mixes which included depolarising agents or oxidants, which reduced any Hydrogen emissions by combination with Oxygen. Other recipes were used to achieve higher cell voltages, higher capacity, lower costs or longer life. Alternative constructions were designed to improve the convenience of use and current carrying capability or to reduce the cell's internal resistance. Later the possibility of electrical recharging became a design aim.

Examples not mentioned elsewhere on this web site are given below.


Non polarising 2 Volt primary cells were mostly based on Potassium dichromate and often used two electrolyte gravity cells (See below). Examples were:

  • 1840 Grenet's single electrolyte Potassium dichromate "Bottle" cell with adjustable Carbon and Zinc electrodes, favoured by Edison for his domestic lighting systems.
  • Voisin and Dronier's Potassium dichromate "Bottle" cell, a variation on the Grenet cell with different electrode controls.
  • 1842 Poggendorff 2 electrolyte cell, similar to the Bunsen cell but with Potassium dichromate replacing the nitric acid.
  • 1852 John Fuller's patented "gravity cell" which had a Zinc negative electrode whose base was immersed in liquid Mercury, in a porous container with a dilute sulphuric acid solution. The positive electrode was Carbon, surrounded by orange-red Potassium dichromate solution and crystals, again in sulphuric acid. Similar cells were patented by Leffert. The following year Fuller improved on Daniell's original design to provide the Daniell cell chemistry as we know it today by replacing the aggressive sulphuric acid electrolyte with the more benign Zinc sulphate, prolonging the life of the cell. He also used the gravity cell construction and the design became very popular for telegraph applications.
  • 1854 Gravity cell proposed by C. F. Varley
  • Radiguet 2 electrolyte cell with electrodes of Mercury and Zinc and electrolytes of sulphuric acid and Potassium dichromate.
  • Guiraud 2 electrolyte cell, a low cost cell with electrodes of Carbon and Zinc and electrolytes of brine and Potassium dichromate

Potassium dichromate is strongly toxic and these cells consequently fell into disuse.

Gravity cells are two electrolyte cells which depend on a lighter electrolyte, such as Zinc sulphate, floating on top of a heavier electrolyte, such as Copper sulphate, like oil and water. Normally diffusion would soon mix the two liquids, destroying the cell's efficacy, but if a current was drawn continuously the natural migration of the ions kept the electrolytes apart. This construction reduced the internal resistance of the battery by eliminating the porous pot from the current path. Gravity cells were used extensively in the telegraph and telephone industry; however the inconvenience of keeping the cells undisturbed, to avoid mixing the electrolytes, and above freezing temperatures eventually led to their replacement.

Gravity cells which used Zinc electrodes suspended in Zinc sulphate or sulphuric acid were also called Crowfoot Cells because the shape of the Zinc electrode resembled the bird's foot.


Other non polarising primary cells such as the Daniell cell were two electrolyte cells based on Copper sulphate and sulphuric acid electrolytes. These included designs by the following inventors:

  • Smee, whose cell was the fore-runner of this class. It used Zinc and Copper electrodes, the Copper electrode being coated with finely-divided Platinum intended to cause the evolved Hydrogen to form bubbles and detach itself. An imperfect solution, but the cell was nevertheless popular in the electroplating industry.
  • Carré who replaced Daniell's porous pot with a parchment membrane.
  • Callaud, who in the 1860s, eliminated the porous cup in the Daniell cell perfecting the gravity cell construction.
  • Hill, whose cell was similar to the Callaud cell.
  • Meidinger whose design was popular in Germany. It used the Callaud chemistry but with a construction which was much easier to maintain.
  • Verité
  • Minotto who developed a gravity cell in 1862, based on Daniell's chemistry, for tropical use. It was used by the Indian PTT.
  • Essick whose cell was designed to operate at 70°C to achieve higher current outputs.
  • Tyer who patented a mercurial battery with silver and Mercury-covered zinc in dilute sulphuric acid.

These cells all produced only 1 Volt which made them less attractive than the 2 Volt dichromate cells.


Many batteries at that time used elemental Mercury for contacts or for preventing local action at the Zinc electrodes. Impurities in the Zinc, such as Iron or Nickel, effectively created minute short-circuited cells around each grain of impurity which soon ate away the Zinc. Pure Zinc was far too expensive to be considered at that time; however in 1835 William Sturgeon discovered that the local action in the cheaper impure Zinc could be eliminated if the Zinc electrodes were amalgamated with liquid Mercury.

In 1840 Sturgeon developed a long lasting battery consisting of a cast Iron cylinder into which a rolled cylinder of amalgamated Zinc was placed. Discs of millboard were used as separators and the electrolyte was dilute sulphuric acid.


Depolarising cells from the same period were usually based on nitric acid with a cell voltage of 1.9 Volts and included:

  • 1839 Grove cell, the first depolarising cell, it was a two electrolyte cell with nitric and sulphuric acid electrolytes and Platinum and Zinc electrodes
  • 1841 Bunsen cell, similar to Grove's cell, it replaced the expensive Platinum with cheaper Carbon.
  • 1853 Farmer cell, similar to Grove's cell with improved design of the porous pot.
  • 1854/5 Callan cell, the Maynooth battery, a two electrolyte cell. Expensive Platinum or unreliable Carbon cathodes were replaced by cast iron. The outer casing was cast iron, and the zinc anode was immersed in a porous pot in the centre.
  • Other variants on this theme were developed by Lansing B. Swan, Thomas C. Avery, Christian Schönbein, Archeneau, Hawkins, Niaudet, Tommasi and d'Arsonval.

Although these cells were popular, rather than the cells simply polarising, the nitric acid decomposed, giving off toxic nitrogen dioxide gas, which eventually led to their demise.


Other developments included:

  • The de la Rue Silver chloride cell whose constant voltage and small size made it popular for medical and testing applications. The electrodes consisted of a small rod or pencil of Zinc and a Silver strip or wire coated with Silver chloride and sheathed in parchment paper. The electrolyte was ammonium chloride contained in a closed glass phial or beaker to avoid evaporation.
  • The Schanschieff battery which used Zinc and Carbon electrodes and an electrolyte of Mercury sulphate. It was suitable for portable applications such as reading and mining lamps.

All of the above cells were primary cells, but most were designed for re-use. In general, they used aqueous electrolytes enclosed in stout containers, often made of glass. Once the cell was discharged the spent chemicals could be replaced or replenished - a form of mechanical recharging. High volume users such as the telegraph and telephone companies pioneered recycling, working with their battery suppliers to reprocess and recover expensive elements from the used electrolytes. (In 1886 Western Union recovered 3000 pounds of Copper in this way.)


A further impetus was given to the search for alternative chemistries after 1860 when Gaston Planté demonstrated the feasibility of rechargeable cells with his Lead Acid battery.

All of the above primary cells were eventually superseded for PTT use by versions of Planté's rechargeable battery or by mains power.

For portable power, the Leclanché cell was one of the few surviving primary cells from this period.


1878 In a letter sent to the publication "English Mechanic and World of Science", Irish experimenter Denis D. Redmond described a 10 by 10 array of Selenium photocells, each connected to a corresponding Platinum wire in a matching array which would glow when light impinged on its photocell. The system was the first to provide electric transmission of moving images, albeit silhouettes, only one year after Adams and Day's photoelectric discovery and one year before Edison patented his light bulb. The system had no image scanning (later provided by Nipkow) so it required 100 channels to transmit the image. Nevertheless it was the forerunner of the modern television system.


The same year Portuguese professor Adriano de Paiva published "La téléscopie électrique basée sur l'emploi du sélénium" in a Portuguese publication "Commercio Portuguez", curiously written mostly in French with some Portuguese. It described a similar system to Redmond's which he called an electric telescope anticipating a different application from what eventually transpired.

It was another five years before practical photovoltaic cells were invented by Fritts.


1878 The bolometer, a very sensitive device for measuring very low levels of incident electromagnetic radiation, including infrared radiation, was invented by Samuel Pierpont Langley. It works by measuring the heating effect of the radiation on the resistance of a suitable conductive material. The name comes from the Greek bolē (a ray of light, or a stroke), from ballein, to throw.

Langley's bolometer used a Wheatstone Bridge with a sensitive galvanometer to measure the differential resistance between two Platinum strips coated with Carbon black, one exposed to the radiation and the other shielded from it. It was sensitive enough to detect the thermal radiation from a cow a quarter of a mile (400 m) away.
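
A minimal numerical sketch of the measurement principle (the component values are illustrative, not Langley's): the differential output of the bridge registers a tiny fractional change in the resistance of the exposed strip.

```python
# Wheatstone bridge with one radiation-sensitive arm (illustrative values).
V_SUPPLY = 2.0   # Volts across the bridge
R_REF = 100.0    # Ohms, the fixed and shielded arms
ALPHA = 0.003    # approx. fractional resistance change of Platinum per degree C

def bridge_output(delta_t_degrees):
    """Differential output voltage when the exposed strip warms by delta_t."""
    r_exposed = R_REF * (1.0 + ALPHA * delta_t_degrees)
    v_exposed_side = V_SUPPLY * r_exposed / (R_REF + r_exposed)
    v_shielded_side = V_SUPPLY * R_REF / (R_REF + R_REF)
    return v_exposed_side - v_shielded_side

for dt in (1e-5, 1e-3, 0.1):
    print(f"warming of {dt:g} C -> output {bridge_output(dt) * 1e6:.4f} microvolts")
```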

Modern bolometers use semiconductor or superconductor absorptive elements to pick up the radiation and are able to detect changes in temperature of less than 1/100,000 of a degree Celsius. They are commonly used to measure the amount of solar energy reaching the Earth.


See also Langley's contribution to aviation.

See more about thermo-electricity.


1878 The invention which did more than any other to promote the use of electricity in the home, the incandescent electric light bulb, was patented in the UK by English physicist and chemist Joseph Wilson Swan in 1878, and the following year in the USA by American Thomas Alva Edison. (See following item).

Swan started his development of incandescent lamps in 1848 using Platinum filaments but, because of the high cost of Platinum and its short life before failing, he switched to Carbon which could withstand the heat better. By 1860 he demonstrated a working device, and was granted a British patent covering a partial vacuum, carbon filament, incandescent lamp almost twenty years before Edison. Carbon unfortunately burns in the presence of oxygen and so must be enclosed in a vacuum and Swan's lamps still suffered from short lifetimes because of the difficulty of achieving a high enough vacuum. By 1878 however, vacuum technology had advanced sufficiently and Swan was able to produce and patent a reliable carbon filament lamp.


1878 The world's first electric power station, built by Sigmund Schuckert at the instigation of King Ludwig II, went into operation in the Bavarian town of Ettal. It contained 24 dynamo-electric generators, based on a design by Siemens, driven by a steam engine. It was used to power an array of Siemens carbon arc lamps illuminating the Venus Grotto in the gardens of Ludwig's Linderhof Palace.


1878 Swedish engineer Karl Gustaf de Laval invented the centrifugal separator, the first practical bulk method for separating cream from milk. The milk container was spun up to over 1000 r.p.m. by a hand crank driving through a worm geared mechanism. He later applied the same principle of centrifugal force to the manufacture of glass bottles.


He is perhaps better known for his invention in 1882 of the impulse steam turbine, now named after him, which ran at a speed of 30,000 r.p.m. The energy delivered to the impulse turbine rotor depends on the kinetic energy of the steam rather than its pressure, so that de Laval's turbine could work over a wide range of steam pressures; however it needed to run at very high speed to achieve reasonable efficiency. A major component of the innovation was the development of the nozzle needed to deliver high kinetic energy steam to the turbine blades. It was a counter-intuitive design with a convergent-divergent, hourglass, shape which could increase the velocity of the steam jet to supersonic speeds. The de Laval flared nozzle principle is now almost universally used in high speed gas jet applications including rocket engine exhausts.

Unfortunately at the time there were few materials available which could handle the mechanical forces associated with the high speed design, and the turbine initially achieved limited success. Centrifugal forces on the rotor are immense, there were no suitable bearings for carrying the heavy, high speed rotor, and the rotation speed was too fast for most applications, so that complex reduction gearing was required. All of these problems were eventually solved and turbines based on de Laval's designs are quite common today.


See diagrams and principles of de Laval's Turbine and Nozzle.


Gustaf de Laval was a prolific designer but a poor businessman and, despite his 92 patents and 37 companies, he died in extreme poverty.


See also Armstrong's hydro-electric scheme which also came on stream the same year.


1879 After an intensive search, starting in 1878, for suitable incandescent filament materials, Thomas Alva Edison patented the Carbon filament incandescent electric light bulb in the USA.


History is written by the winners and a certain mythology has built up around Edison's inventive genius. The light bulb itself is synonymous with bright ideas but also with Thomas Edison himself. Forgotten however is English experimenter Warren de la Rue's 1840 incandescent lamp using a Platinum filament in a partially evacuated glass tube. Forgotten also are all the previous patents for electric lights similar to Edison's using carbon filaments in evacuated bulbs or bulbs filled with inert gas. These included American John W Starr from Cincinnati who was granted a UK patent in 1845 for a Carbon filament incandescent lamp which he successfully demonstrated to Michael Faraday. Unfortunately Starr was found dead in bed the day after the demonstration at the age of 25, it is said, of "excitement and overwork of the brain" and nothing further became of his invention. Forgotten too are the similar inventions of Alexander Lodygin in Russia (1872), Henry Woodward and Matthew Evans in Canada (1874) and Joseph Swan in England (1878) (See previous two items) who demonstrated an almost identical lamp to the Newcastle Literary and Philosophical Society eight months before Edison's (1879) "breakthrough". Edison actually sued Swan for patent infringement and the matter was finally settled out of court when the rivals formed the Edison and Swan United Electric Company.

Although Swan got there first, at the time, the only source of domestic electrical energy generally available was the battery and so all the lighting development took place using DC/battery power and it was Edison who popularised the invention in 1882 by providing the necessary electricity generating and distribution systems to power the lamps which made electric lighting practical. See Edison's generators.

Considering that Edison's name is almost synonymous with the invention of the light bulb it is perhaps surprising to note that in 1883 the US Patent Office ruled that a prior invention patented in 1878 by William Sawyer and Albon Man took precedence.

See also Tesla (1887)


Despite the unfortunate ending in 1874 of Edison's relationship with William Orton the head of Western Union, in 1877 Edison was hired once more by Orton to try to break Bell's patents on the telephone. Orton is quoted as saying that "Edison had a vacuum where his conscience ought to be". The battlefield was to be the telephone transmitter where Bell's design was inadequate but several others were already working on this. Edison provided an innovative design but it also used ideas developed by others and Edison's rights to these were only settled after litigation. He was paid over $100,000 for his solution by Western Union and this gave him the funding and the independence he needed to develop his creative talent. Bell's lawyers later successfully overturned Orton's main patent challenges to Bell's system although Edison's patents on the carbon microphone were upheld.


Edison became known as the Wizard of Menlo Park, where he employed an army of engineers working on development projects and an aggressive team of lawyers. He made his first patent application in 1868 when he was 21 years old, and over his lifetime he was granted 1,093 U.S. patents including 106 in 1882. In addition he also filed an estimated 500-600 unsuccessful or abandoned applications. This amounts to two successful patents per week during his most productive period and a patent application on average every eight working days over his long working lifetime of sixty years. Considering that three of these inventions, the light bulb, the phonograph and the movie projector for which he is famous, each took several years of development, and at the same time he had a large company to run, you have to ask yourself how much Edison himself contributed to the patents which bear his name.


See also the Current wars.


Canadian author Peter McArthur is quoted as saying in 1901 "Every successful enterprise requires three men: a dreamer, a businessman and a son-of-a-bitch". The giants of the industry seem to embody all three of these characteristics at the same time.


The tale of the light bulb is a re-run of the disputes and dirty dealings around the invention of telegraphy by Morse and Edison, Bell's telephony, Edison's carbon microphone, and Bain's electric clock and fax machine - stories and intrigues destined to be repeated with AC electrical power generation and distribution, radio (Marconi), radio and telephony (Pupin), computers and each new technology advance, though surprisingly the "invention" of the internet seems relatively free from such disputes and charlatans.


1879 Repeating the 1858 experiment of Plücker and Hittorf, Sir William Crookes used a Geissler vacuum tube with an anode in the shape of a cross and noticed that the cross cast a shadow on a Zinc sulphide fluorescent coating on the end of the tube. He hypothesised that there must be rays coming from the cathode which caused the Zinc sulphide to fluoresce and the cross to cast a shadow. He called these rays cathode rays. Crookes tubes were used by Röntgen in 1895 to demonstrate X-rays and by J. J. Thomson in 1897 in his discovery of the electron.

Crookes also invented the radiometer which detects the presence of radiation. It consists of an evacuated glass bulb in which lightweight metal vanes are mounted on a low friction spindle. Each vane is polished on one side, and blackened on the other. In sunlight, or exposed to a source of infrared radiation (even the heat of a hand nearby can be enough), the vanes turn with no apparent motive power.

Crookes was a believer in the occult and in the 1870s claimed to have verified the authenticity of psychic phenomena. He was knighted by Queen Victoria who, it is rumoured, had similar interests.


1879 American physicist Edwin Herbert Hall discovered that when a solid material carrying an electric current is placed in a magnetic field perpendicular to the current, a transverse electric field is created within the current carrier, a phenomenon now known as the Hall Effect in his honour. The voltage drop across the conductor at right angles to the current is called the Hall Voltage and is proportional to the external magnetic field. The phenomenon is now used in sensors for measuring magnetic field strength as well as current.
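
For a flat conductor of thickness t carrying current I in a perpendicular magnetic field B, the Hall voltage takes the standard form:

\[ V_H = \frac{IB}{nqt} \]

where n is the density of charge carriers and q the charge on each; the direct proportionality to B is what Hall effect sensors exploit.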


1879 Siemens & Halske demonstrated an electric railway at an exhibition in Berlin. Power was provided from a separate generator which supplied the train via a third rail. A similar system was built in 1883 to run a commercial service along Brighton promenade in the UK by the son of a German clockmaker, Magnus Volk, an electrical engineer who had already completed the electric lighting of Brighton Pavilion. It was the world's first publicly operated electric railway when it opened and, with some modifications, his trains are still carrying passengers along the promenade today.


1879 Austrian physicist Josef Stefan formulated a law which states that the radiant energy of a blackbody is proportional to the fourth power of its temperature.
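
In modern form, with j the power radiated per unit area and σ the Stefan-Boltzmann constant:

\[ j = \sigma T^4, \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \]

so doubling the absolute temperature of a blackbody increases its radiated power sixteen-fold.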


1879 After five years working as a music professor, Welsh born American David Edward Hughes had resigned in 1855 to patent a printing telegraph which became very successful in the USA and most of Europe, except Great Britain, bringing him international honours. In 1879 he invented the induction balance, the basis of the metal detector. It consists of two coils, one transmitting a low frequency signal and one connected to a receiver (detector), arranged in such a way that the receiver coil is close to, but shielded from, the transmitter coil so that in free space it does not pick up (detect) any signals from the transmitter. When the coils are brought near to a metal object, small perturbations in the magnetic field upset the balance between the coils, causing a current to flow in the receiving coil and thus indicating the presence of the metallic object.


The same year, while working on his induction balance, he noticed a clicking in a separate home made telephone ear-piece which was not connected in any way to the induction balance. He diagnosed this to be caused by a loose wire in his induction balance, since the clicking stopped when the wire was firmly connected. He deduced that invisible waves, which he called aerial transmissions and which would today be called radio waves, were being emitted from a spark gap which occurred when the wire in the transmitting coil of the induction balance became disconnected, and that the ear-piece was picking them up. Investigating further, he devised a clockwork device for opening and closing the spark gap and was able to pick up signals from the spark gap with his telephone receiver over ever greater distances, up to 500 yards, walking up and down Great Portland Street in London. Effectively he made the world's first mobile phone call. In 1880 Hughes demonstrated the phenomenon of radio communications to the Royal Society in London but the president, mathematician William Spottiswoode, was not impressed. According to George Gabriel Stokes, the Irish mathematician and physicist specialising in hydraulics and optics, who witnessed the demonstration, the phenomenon could be explained by induction, not radio waves. Discouraged, Hughes moved on to other interests and did not pursue his discovery. Eight years later Hertz was credited with the discovery of radio waves.


1880 The brothers Pierre and Jacques Curie predicted and demonstrated piezoelectricity. See more about the quartz crystal and piezoelectricity.


1880 Camille Alphonse Faure in France patented pasted plates for manufacturing Lead-acid batteries. The Lead plates were coated with a paste of Lead dioxide and sulphuric acid which greatly increased the capacity of the cells and reduced the formation time. This was a significant breakthrough which led directly to the industrial manufacture of Lead-acid batteries.


1880 Herman Hammesfahr, a German immigrant to the USA, was awarded a patent for a durable and flame retardant fibreglass cloth with the diameter and texture of silk fibres. He showed a glass dress at the 1893 Chicago World Fair. (Also attributed to American glass manufacturer Edward Drummond Libbey founder of Owens-Illinois).


1881 Improvements to the Leclanché cell, to avoid leakage, by encapsulating both the negative electrode and porous pot into a sealed Zinc cup were patented by J.A. Thiebaut.


1881 The first electric torch, or flashlight, was patented by English inventors Ebenezer Burr and William Thomas Scott. The original lamps were designed as portable table lamps and powered by a wet cell battery in a waterproof box. At the time the first power station had not yet been commissioned and there were no households wired up for mains electricity. More convenient portable versions of the torch using the recently invented dry cells were introduced starting in 1883. They quickly became popular for bicycle and miners' lamps.


1881 Lead acid rechargeable batteries were first used to power an electric vehicle by Gustave Trouvé in France.


1881 The first International Electric Congress, or International Conference of Electricians, convened in Paris to define the international terms for the electrical units of electromotive force (Volt), resistance (Ohm) and current (Ampère). The Congress also specified the manner and conditions in which the units were to be measured. Up to this time there had been at least twelve different units of electromotive force, ten different units of current, and fifteen different units of resistance.

The standard Ohm was defined by the resistance of a specified column of Mercury, the standard Ampère by the current which deposits metallic Silver at a specified rate from a Silver nitrate solution, and the standard Volt by the EMF produced by an electrical circuit moving through a magnetic field at a specified rate. However, since most laboratories were not equipped to generate a standard Volt in the specified manner, and in any case they used batteries to provide their source of electric potential, a new voltage standard was devised, based on the EMF produced by a standard Clark cell, and this was adopted at the fourth International Electric Congress in Chicago in 1893. Unfortunately, with the three standards each based on independent measured quantities, Volts did not always equal Amps multiplied by Ohms and the voltage standard had to be changed once more. The 1908 International Congress in London consequently changed the Volt to a derived unit based on the standard Ampère and standard Ohm.


1881 American engineer Frederick Winslow Taylor working at the Midvale Steel company introduced Time and Motion Studies or Work Study and Method Studies to streamline manufacturing and eliminate unnecessary work. They enabled major efficiency savings to be made and became the foundation of Scientific Management.


1881 Patent granted to William Wiley Smith for the induction telegraph used to communicate with moving trains. Soon afterwards improved versions were invented independently by Lucius J. Phelps (1884), Edison (1885) and black American Granville T. Woods (1887). The system consisted of a track-side wire or rail which could pick up signals from an induction coil mounted on the train, the two essentially acting as the primary and secondary windings of a transformer. The forerunner of the mobile phone.

Similar systems, based on the same principle, were also used for fixed wireless communications before the discovery of radio (Hertzian) waves.


1882 French physicist and physician Jacques Arsène d'Arsonval invented the moving coil galvanometer. It had shaped pole pieces which enabled it to have a linear scale and became the basis of all modern electromechanical analogue panel meters.
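
The linear scale follows from the torque balance in the movement. With a radial field of flux density B (the purpose of the shaped pole pieces), a coil of N turns and area A, and a restoring spring of stiffness k, the deflection is directly proportional to the current (standard textbook form):

\[ \theta = \frac{NBA}{k}\,I \]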


1882 Nikola Tesla, working in Budapest, identified the rotating magnetic field principle and the following year used it to design a two-phase induction motor.


1882 French chemists Felix de Lalande and Georges Chaperon introduced the first battery to use an alkaline electrolyte, the Lalande-Chaperon cell, the predecessor of the Nickel-Cadmium cell. Using electrodes of Zinc and Copper oxide with a Potassium hydroxide electrolyte, it was rechargeable and produced a voltage of 0.85 Volts.

Up to that point, all batteries had used acidic electrolytes. They chose to investigate alkaline rather than acidic electrolytes because the electrodes of most metals and their compounds are attacked by acid. Lead is one of the few metals which resists the acids, but it is very heavy, and a weight saving could be secured by using almost any other metal.


1882 English amateur scientist James Wimshurst invented the Wimshurst Electrostatic Generator, the first machine capable of generating high voltage static electricity that was unaffected by atmospheric humidity. Static electrical charges of opposite polarity built up on its two fourteen and a half inch (38 cm) contra-rotating discs, sufficient to draw a four and a half inch (12 cm) spark. Since the breakdown voltage of air is about 30,000 Volts per centimetre, this small table top machine was capable of generating over 300,000 Volts. As a reliable source of high voltage electricity it not only provided a practical power source for X-ray machines, but was a boon to Victorian experimenters, enabling them to carry out serious scientific investigations or dubious experiments in electrotherapy. Wimshurst's basic design is still used in electrical laboratories today.


James Wimshurst was the son of British ship builder Henry Wimshurst who built the Archimedes, launched in 1839 as the world's first propeller driven steamship, demonstrating the screw propeller patented in 1836 by Francis Pettit Smith.


1882 Ayrton and Perry in England built an electric tricycle with a range of 10 to 25 miles, powered by a Lead acid battery and sporting electric lights for the first time. (Four years before the first internal combustion engine car by Karl Benz.)


1882 British engineer James Atkinson patented modifications to the spark ignition, four stroke, internal combustion engine to circumvent Otto's patent. The design used a complex crankshaft arrangement to provide a longer exhaust (power) stroke than the induction stroke, improving the efficiency of the heat cycle. The penalty was a more complicated mechanical mechanism as well as a larger, heavier engine. The industry however preferred the simpler Otto design and Atkinson's design did not achieve commercial success in his lifetime. Recently however the design has been making a comeback as fuel efficiency becomes a priority.

See also Heat engines.


1882 In a display of optimism, the first small domestic electrical appliances began to appear, three months before power was available from the first electricity generating station. The electric fan, a two bladed desk fan, was invented by Schuyler Skaats Wheeler and manufactured by the Crocker and Curtis electric motor company, and the electric safety iron was invented by New Yorker Henry W. Seely.


1882 The world's first two large scale central electricity generating plants, or power stations, were completed by the Edison Electric Lighting Company. The first practical DC electrical power plants had already been installed in 1878, including a steam driven plant in Germany by Sigmund Schuckert and a hydroelectric scheme in the UK by William Armstrong. The first of Edison's generators to come on stream, in April, was at Holborn Viaduct in London, providing DC power for 2000 electric lamps. The second, in September, was at Pearl Street Station in New York City's financial district, supplying 82 customers with power for 400 lamps, increasing to 508 customers with 10,164 lamps two years later. Reciprocating, coal fired, Porter and Allen steam engines provided the motive power (about 900 horsepower) to 27 ton direct-current (DC) dynamos which produced 100 Kilowatts of power at 110 Volts. The overall energy efficiency is estimated at 6%.


Just 26 days after Edison's steam driven dynamos at Pearl Street were fired up, his hydroelectric dynamo at the Vulcan Street Plant on the Fox River at Appleton, Wisconsin also came on stream supplying 12.5 kW of DC electric power to a home and two paper mills owned by a Mr R.H. Rogers. It was the first hydroelectric scheme in the world to supply commercial customers.

The project was instigated by Rogers, who was president of the Appleton Paper and Pulp Company and also of the Appleton Gas Light Company, after he heard about Edison's plans for Pearl Street. Being a ready-made user as well as having a ready-made customer, Rogers persuaded a group of investors to join him in funding the project.

Edison's DC generator was driven by a 42 inch (1 m) diameter Elmer pivot-gate hydraulic turbine, fed from a ten foot (3 m) head and running at a speed of 72 rpm. It was designed and patented by an inventor named Elmer from Berlin, Wisconsin and manufactured by a local company, Morgan and Bassett. Edison obviously had a much better public relations team than Elmer, since this development is forever associated with Edison, while Elmer's important contribution to this landmark project has been almost forgotten, with information about him now almost impossible to find.


Despite these early breakthroughs, Edison's DC distribution scheme lost out to Tesla and Westinghouse's more efficient AC distribution scheme in the War of the Currents.


1882 Young American engineer William Joseph Hammer, testing light bulbs for Edison, noted a faint blue glow around one side of the filament in an evacuated bulb and a blackening of the wire and the bulb at the other side, a phenomenon which was first called Hammer's Phantom Shadow. In an attempt to keep the inside of the electric lamps free of soot, he placed a metal plate inside the evacuated bulb and connected a wire to it. He noted the unidirectional or "one-way" current flow from the incandescent filament across the vacuum to the metal plate, but he was unable to explain it or realise its significance at the time. It was in fact due to the thermionic emission of electrons (not discovered until 1897 by J. J. Thomson) from the hot electrode of the filament, flowing to the cold electrode of the plate, creating in effect a vacuum diode or valve. In 1884 Edison was awarded a patent for a device using this effect to monitor variations in the output from electrical generators. The indicator proved ineffective; however Hammer's discovery of thermionics was henceforth known as the Edison effect. The Edison effect is the basis of all vacuum tube devices and thus the foundation of the electronics industry of the early 20th century. The first practical vacuum tube diode was patented by Fleming in 1904.


1882 English engineer John Hopkinson, an advocate of DC electricity generation and distribution systems, patented a three-wire, DC transmission system to supply two independent loads or a single load with double the voltage. (Not to be confused with Tesla, Dobrovolski, Wenström's later three phase AC distribution system which was carried on four wires). Hopkinson's system used one common wire between both circuits which enabled two, two-wire circuits to be supplied from a single generator with an output voltage of double the voltage of the individual circuits. He demonstrated the principle of load balancing to minimise the current flowing in the common, neutral line when two different DC motor loads are connected to the two parallel arms of his system. The system saved between 25 and 50 percent of the copper required for the conductors, depending on the balance between the loads. As in many other cases, Edison claimed the invention to be his.
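
The balancing principle is easy to see numerically. A minimal sketch with invented load currents: each outer conductor carries its own side's load, while the common neutral carries only the difference between the two sides.

```python
# Hopkinson three-wire DC system: two 110 V circuits in series across 220 V.
# Load currents below are invented purely for illustration.

def neutral_current(load_a_amps, load_b_amps):
    """The common (neutral) wire carries only the imbalance between the sides."""
    return abs(load_a_amps - load_b_amps)

for a, b in [(50, 50), (60, 40), (100, 0)]:
    print(f"side A {a} A, side B {b} A -> neutral carries {neutral_current(a, b)} A")
```

With perfectly balanced loads the neutral carries nothing at all, which is where the copper saving over two separate two-wire circuits comes from.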

In 1884 Hopkinson also published a paper analysing the synchronisation of parallel AC generators connected to similar systems. See more about Maintaining Grid Frequency Stability.


Hopkinson died tragically at the age of 49, together with three of his six children, in a mountaineering accident in Switzerland.


1883 Edison patents the fuse.


1883 Charles Edgar Fritts, an American inventor, built the first practical photovoltaic module by coating Selenium wafers with an ultra thin, almost transparent layer of Gold. The energy conversion efficiency of these early devices was less than 1%. Denounced as a fraud in the USA for "generating power without consuming matter, thus violating the laws of physics", the idea of solar cells was taken up and commercialised by Siemens in Germany.


1883 Irish physicist George Francis FitzGerald suggests that Maxwell's theory of electromagnetic waves indicates that radio waves can be produced by an oscillating electric current.


1884 In an attempt to simplify Maxwell's Equations, British engineer, physicist and mathematician Oliver Heaviside developed the branch of mathematics known as vector calculus. Maxwell had expressed his theory with a cumbersome series of 20 partial differential equations in 20 variables representing the electric and magnetic fields. The equations for the fields were dependent on the coordinate system used: in each of the Cartesian, polar or spherical coordinate systems, three different equations were needed to represent the three possible components of the field directions. Heaviside defined the new vector operators GRAD, DIV and CURL, which enabled him to rewrite Maxwell's equations in vector notation, in a form which is independent of the coordinate system, as just four equations in four variables. Maxwell's equations are now normally presented in the form developed by Heaviside.
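
In Heaviside's vector notation the four equations take the compact form familiar today (shown here in modern SI form):

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \]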


Heaviside contributed much to communications theory but sadly remained unrecognised in his lifetime. In 1880 he patented the coaxial cable. In 1887 he investigated the causes of distortion in transmission lines, showing mathematically that it was due to the distributed capacitance along the line and, more importantly, that it could be corrected or reduced by adding distributed inductance along the line. His suggestion to install induction coils at intervals along transmission lines was turned down by William Preece, the assistant chief of the British Post Office, who controlled the lines, and it was published without fanfare in "The Electrician". The idea however was taken up in America by AT&T and by Michael Pupin, a Columbia University lecturer in mathematical physics. Pupin subsequently patented the idea of inductive loading coils in 1899 and "Pupin coils" were implemented by AT&T throughout their network, enabling them to increase dramatically the range of their telegraph and telephone cables. The patent made him extremely wealthy, much to Heaviside's chagrin, not so much for the money, which was never important to him, but for the recognition which he felt he deserved. While initially acknowledging Heaviside's contribution, Pupin changed his stance when the value of his patent became clear. His autobiography, "From Immigrant to Inventor", an example of the American dream, won him a Pulitzer Prize. In it, he rubs salt into Heaviside's wounds by mockingly crediting inspiration for "his invention" to a herdsman from his native Serbia who showed him how to send sound signals by tapping on the ground.

Heaviside is remembered today more for his 1902 prediction, published in the Encyclopaedia Britannica, of the ionised layer in the upper atmosphere which reflected radio waves, making long distance radio transmission possible by bending the radio wave around the curvature of the Earth. Known as the Heaviside Layer, or the Kennelly-Heaviside Layer, since Arthur Edwin Kennelly, an expatriate Briton working in the USA, independently made the same prediction at the same time, its existence was verified in 1924 by Edward Victor Appleton.


Heaviside's life was not a happy one. He was not a wealthy man and worked much of his life with no regular income. His mathematics were difficult to understand even by the most technically literate and the injustice of Pupin's exploitation of his ideas affected him greatly. An embittered man, he never married, living an eccentric existence in bare rooms furnished with granite blocks. In later life his appearance became more and more unkempt and children would taunt him in the street, shouting "Poop. Poop. Pupin"!


1884 British engineer Charles Algernon Parsons, graduate apprentice of William Armstrong, produced his first steam turbine. Coupled to a dynamo of his own design it generated 7.5 kW of electricity, but failed to generate any commercial interest. One of his key innovations was the compound reaction turbine, which used a set of stator blades to redirect the steam after it had passed through the first rotor blades so that it could be directed through a second rotor, and hence through further rotor/stator pairs. This allowed much higher power outputs and efficiencies to be achieved. See photographs of Parsons turbine showing the blades.

To publicise his invention, in 1894 he took out a patent on the turbine and commissioned a 100 foot long steel boat, the Turbinia, to demonstrate its capability. Initially he did not achieve the desired speed through the water as its propellers, rotating at 18,000 rpm, suffered from the previously unheard of problem of cavitation and churned up the water as bubbles formed behind the blades due to the sudden pressure reduction. However by slowing down the turbine and modifying the propellers he was able to achieve a speed of 34.5 knots from a 2,300 hp turbine. Still his target customer, the Admiralty, was unimpressed. According to Parsons' biographer Ken Smith, Parsons' dictum was "If you believe in a principle, never damage it with a poor impression. You must go all the way". His opportunity came at the 1897 Spithead Naval Review of 160 of the British navy's ships, arranged to show off the might of the Royal Navy to Queen Victoria and invited foreign dignitaries on the sixtieth anniversary of the queen's accession to the throne. The navy's best boats were capable of no more than 30 knots and the Turbinia astonished the gathered crowd by steaming up and down the navy's lines, leaving their fastest boats in her wake. The steam turbine's future was assured. Today 86% of the world's electricity is generated using steam turbines.


See more about Steam Turbines and how they work.

See more about Steam Engines


1884 Charles Renard uses a Zinc/Chlorine Flow Battery to power his air ship La France with the chlorine being supplied by an on board chemical reactor containing Chromium Trioxide and Hydrochloric Acid.


1884 Swedish chemist Svante August Arrhenius, working at the University of Uppsala, published his PhD thesis on the Galvanic Conductivity of Electrolytes, explaining the process by which some compounds conduct electricity when in solution. He proposed that when a compound like table salt NaCl (Sodium chloride) was dissolved in water, it dissociated into positively and negatively charged "ions" (Greek for "the ones that move" or "wanderers"), Na+ and Cl-, whose motions constituted a current. These ions drift freely through the solution, but when positive and negative electrodes are introduced into the electrolyte, as in electrolysis, the ions drift towards the electrode of opposite polarity. He defined an acid as any substance which, when dissolved in water, tends to increase the amount of H+ Hydrogen ions, and a base as any substance which, when dissolved in water, tends to increase the amount of OH- hydroxide ions. (These definitions do not cover all possibilities which are now known to exist.)

His 1884 thesis was treated with disbelief and was given the lowest passing grade at the time; however he was vindicated by the discovery of the electron by J. J. Thomson in 1897, and his disparaged thesis won him the Nobel Prize for Chemistry in 1903.


In 1887 Arrhenius was the first to develop the theory quantifying the rate at which chemical reactions proceed, now known as the Arrhenius equation.
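
The rate law is now usually written in the form:

\[ k = A\, e^{-E_a/RT} \]

where k is the reaction rate constant, A the pre-exponential factor, E_a the activation energy, R the gas constant and T the absolute temperature.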


In 1896 Arrhenius was also the first to describe the "Greenhouse Effect" and its causes.


1884 French chemist Henri Louis Le Chatelier discovered the chemical equivalent of Lenz Law of electromagnetism. It was published in simpler form 4 years later as: "If the conditions of a system, initially at equilibrium, are changed, the equilibrium will shift in such a direction as to tend to restore the original conditions". The conditions refer to concentration, temperature and pressure. Le Chatelier's Principle allows you to predict which way the equilibrium will move when you change the reaction conditions, and helps provide ways to increase the yield in a chemical reaction.
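
A familiar textbook illustration (the ammonia synthesis, chosen here purely as an example):

\[ \mathrm{N_2 + 3H_2 \rightleftharpoons 2NH_3} \]

Four molecules of gas on the left become two on the right, so increasing the pressure shifts the equilibrium to the right and increases the yield of ammonia, exactly as Le Chatelier's Principle predicts.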


1884 German engineering student Paul Gottlieb Nipkow patented an electromechanical image scanning system, the basis of television raster scanning. The system was made possible by the light-sensitive properties of the element Selenium, used in the photocells recently produced by Fritts. Previous attempts at transmitting images, such as Redmond's, had used one channel, or pair of wires, to transmit each picture element. Nipkow's design needed only one pair of wires to transmit the whole image. He used a rotating disk with holes, through which the scene could be observed, arranged circumferentially around the disc in a spiral between the centre and the edge. Light passing through the holes as the disk rotated impinged on a Selenium photocell, generating an electrical signal proportional to the brightness of the scene, which could be transmitted down wires to a receiver. As the disk rotated it produced a rectangular scanning pattern, or raster, which scanned the scene. The number of scanned lines was equal to the number of holes and each rotation of the disk produced one television frame.

A similar Nipkow disc, synchronised with the transmitter disc, was used in the receiver, and the received electrical signal was used to vary the brightness of a light source illuminating a projection screen. The light passing through the rotating disk formed a raster on the projection screen, allowing an image to be built up. Like all television systems it depended on the principle of "persistence of vision" and rapid scanning was needed to make it work. This was the first design for transmitting moving images electrically down a single pair of wires; however it is not clear whether Nipkow actually built a working system. The signals from the Selenium cell were very weak and needed amplification for a practical system, and it was not until 1907 that De Forest's audion made this possible.
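
The geometry of the disk is straightforward to reproduce. A sketch (with an arbitrary hole count and disk size) generating the spiral of hole positions, one hole per scanned line:

```python
import math

# Nipkow disk geometry: one hole per scanned line, arranged in a spiral so
# that each hole sweeps a strip one step closer to the centre than the last.
# Dimensions are arbitrary, chosen only for the example.
NUM_LINES = 18        # number of holes = scanned lines per frame
OUTER_RADIUS = 100.0  # mm, radius of the outermost hole
LINE_PITCH = 1.5      # mm, radial step between successive holes

for n in range(NUM_LINES):
    angle = 2 * math.pi * n / NUM_LINES      # holes evenly spaced around the disk
    radius = OUTER_RADIUS - n * LINE_PITCH   # spiralling in towards the centre
    x = radius * math.cos(angle)
    y = radius * math.sin(angle)
    print(f"hole {n:2d}: {math.degrees(angle):6.1f} deg, r = {radius:6.2f} mm, "
          f"x = {x:7.2f}, y = {y:7.2f}")
```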


1885 German physicist Eugen Goldstein, using a cathode ray tube with a perforated cathode, discovered rays of positively charged particles emerging from the holes on the far side of the cathode and moving in the opposite direction to the cathode rays. He called these rays Canal rays. The particles were later determined by Wien to be protons, with a mass almost 2000 times that of an electron.


1885 Italian physicist Galileo Ferraris discovered the rotating magnetic field, which he applied to the first 4 pole induction motor. He did not patent his invention but offered it freely to "the service of mankind". In 1888 he published a paper describing an electrical alternator and around the same time a similar device was patented by Tesla.


1885 Russian Nikolai Benardos and Pole Stanislav Olszewski were granted a patent for an electric arc welder with a carbon electrode. They are considered the inventors of modern welding apparatus, although electric arc welding had first been proposed by Lindsay fifty years earlier, in 1835.


1885 Engineers from the Ganz factory in Hungary, Ottó Titusz Bláthy, Miksa Déri and Károly Zipernowsky, demonstrated at the National Exhibition in Budapest a high voltage alternating current distribution system using toroidal transformers which they had also designed. The entire exhibition was illuminated by 1,067 incandescent lamps, rated at 100 Volts each, supplied by 75 transformers taking their power from a 1,350 Volt, 70 Hz distribution system.

In modern day power transformers the windings are usually wound around a laminated Iron (Silicon steel) core (either directly or on a former). The Ganz transformers at the time provided a breakthrough in efficiency because of their unique construction which improved the transformer's magnetic circuit. The primary and secondary windings were first wound together in the shape of an annular ring and this formed the core of a torus. The magnetic circuit was made by toroidally winding thousands of turns of iron wire around the copper windings, completely encasing them in magnetic material which almost filled the inner space of the ring.

Bláthy also patented the first alternating-current kilowatt-hour meter in 1889.


1885 German mechanical engineer, Karl Friedrich Benz designed and built the world's first practical automobile to be powered by an internal combustion engine. It was a "three wheeler", powered by a water cooled 958cc, 0.75hp four stroke engine based on Nicolaus Otto's patent with electric ignition and differential gears. He was granted a patent for the gasoline fuelled "motor carriage" the following year and built his first four wheeled car in 1891. His invention marked the start of the slow demise of the battery driven car.


1886 After Bláthy's demonstrations of alternating current power distribution the previous year, New Yorker William Stanley Jr in the USA patented an improved "Induction Coil", a device descended from Michael Faraday's 1831 discovery of electromagnetic induction, and what we would now call a transformer. This opened the door to the widespread use of AC power for domestic applications. Battery power, once the only source of electricity in the home, now had a serious competitor.


1886 Carl Gassner of Mainz patented the Carbon-Zinc dry cell which made batteries the convenient power source they are today. It used the basic Leclanché (1868) cell chemistry, with the chemicals encased in a sealed Zinc container which acted as the negative electrode. A Carbon rod immersed in a Manganese dioxide/Carbon black mixture served as the positive electrode. Initially the electrolyte was ammonium chloride soaked into the separator, which was made of paper, but adding Zinc chloride to the electrolyte reduced the wasteful corrosion of the Zinc when the cell was idle, adding considerably to the shelf life. A bitumen seal prevented leakage. Although the technology has been refined by over a century of development, the concepts and chemistry are the same as Gassner's first cells.
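
The overall discharge reaction of the Leclanché chemistry which Gassner sealed into his dry cell is commonly summarised as (one of several simplified representations of this complex chemistry):

\[ \mathrm{Zn + 2MnO_2 + 2NH_4Cl \longrightarrow Zn(NH_3)_2Cl_2 + Mn_2O_3 + H_2O} \]

with the Zinc of the container itself consumed as the cell discharges, which is why exhausted dry cells are prone to leak.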

Previously most wet primary cells could be recharged mechanically by replacing the spent chemicals. The used electrolyte could then be recycled to recover the basic constituents. The advent of the dry cell marked the beginning of the single use, throwaway, primary cell since it was no longer easy or possible for the user to replace or replenish the active chemicals.


1886 Patent granted to American chemist Charles Martin Hall for the electrolytic process for extracting Aluminium from its bauxite ore, Aluminium oxide or alumina. His discovery was made in a laboratory he set up at home, using home made Bunsen batteries, shortly after finishing his undergraduate studies. The process was discovered simultaneously by French chemist Paul Héroult and is now called the Hall-Héroult process.

Aluminium is the most abundant metal and the third most abundant element in the Earth's crust but, because it is highly chemically reactive, it does not occur in nature as a free metal. Before Hall discovered a practical way of extracting it from its ore, Aluminium metal was extremely rare and cost more than Gold.

On an industrial scale the process uses enormous amounts of electricity, consequently Aluminium extraction plants are normally located close to the sources of cheap hydroelectric power.

Hall went on to found ALCOA, the Aluminium Company of America.

See also Héroult


1886 English inventor Herbert Akroyd Stuart built the first compression ignition engine, which he patented in 1890. In subsequent patent disputes with Rudolf Diesel, who patented a similar engine in 1893, Akroyd Stuart's claim to priority was upheld.


1887 Kelvin patented the electrostatic voltmeter.


1887 The Michelson-Morley experiment to determine the properties of the luminiferous aether (also called the "ether") and their influence on the speed of light was carried out in Cleveland, Ohio by Albert A. Michelson, professor of physics at the Case School of Applied Science, and Edward W. Morley, professor of chemistry at Western Reserve University.

At the time, the results of their experiment did not confirm the prevailing theory or conventional wisdom and the experiment was judged to be a failure. It was later realised however that it was the theory which was faulty, not the experiment, and that they had discovered a most important physical phenomenon.


For 200 years after Boyle had shown that sound cannot be transmitted through a vacuum, scientists had theorised that light must similarly require a medium to support the transmission of its wave motions. They called this medium the luminiferous aether (meaning "light bearing substance") and, since light can be received from distant stars, they speculated that this aether must fill the Universe. Since light can travel through a vacuum, it was assumed that even a vacuum must be filled with this mysterious aether, and since material bodies can pass through it without obvious friction or drag it must have an unusual combination of properties. The Michelson-Morley experiment was set up to investigate these properties.


Since it was assumed that the aether permeated the entire Universe, it must therefore be fixed and the movement through this stationary aether of the Earth as it orbits around the Sun at a speed of over 67,500 mph (108,000 km/h) would be experienced by observers on the Earth as an aether wind. This wind would either increase or decrease the speed of light depending on the direction of the Earth with respect to the direction of the wind.

It was reasoned that the speed of light would be constant with respect to the proposed stationary aether, but if the Earth was moving with respect to the aether then that motion could be detected by comparing the speed of light in the direction of the Earth's motion with the speed of light at right angles to the Earth's motion.


Assuming the speed of light with respect to the stationary aether is c, light travelling perpendicular to the direction of the aether wind will also be c. However with an aether wind speed of v, we would expect the speed of light travelling in the same direction as the aether wind, but upstream against the wind, to be diminished from c to c-v while the speed of light travelling downstream with the wind would be augmented from c to c+v.

Using the above assumptions, the expected average speed c' of the light beam travelling back and forth in the same direction as the aether wind will be reduced to c' = c(1 - v²/c²). This is because the time gained from travelling downwind is less than the time lost travelling upwind. (The round trip perpendicular to the wind would also be slowed, but only by the smaller factor √(1 - v²/c²).) Since v is very much smaller than c, the speed difference between the two perpendicular paths will also be very small and difficult to measure.
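A minimal numerical sketch in Python, using the modern value of c and the Earth's orbital speed, shows just how tiny the effect being sought was:

    # Expected round-trip average light speeds in the aether model.
    c = 2.998e8   # speed of light, m/s
    v = 2.98e4    # Earth's orbital speed, m/s (about 67,500 mph)

    beta2 = (v / c) ** 2
    c_parallel = c * (1 - beta2)              # arm along the aether wind
    c_perpendicular = c * (1 - beta2) ** 0.5  # arm across the aether wind

    # The fractional difference is of order v²/2c², about 5 parts in 10⁹,
    # far too small to measure directly, hence the interferometer.
    print((c_perpendicular - c_parallel) / c)  # ~4.9e-09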


Michelson had the solution to this problem. In 1881 he had constructed an interferometer, a sensitive optical device that compares the optical path lengths for light moving in two mutually perpendicular directions. See diagram of Michelson's Interferometer and an explanation of its functions.


The Michelson-Morley experiment used an adapted version of Michelson's interferometer and expected to find interference fringes due to the different light speeds, and the consequent different transit times over equal distances, of light beams travelling in the same direction as the aether wind compared with light beams travelling perpendicular to it. They mounted their apparatus on top of a large block of sandstone about a foot (30 cm) thick and five feet (1.5 m) square, and floated it in a circular trough of Mercury to minimise vibrations and to allow the set up to be rotated so that measurements could be taken with respect to any angle of the Earth's direction through space. One arm or light path was aligned with the direction of travel of the Earth, the other perpendicular to it.


To their chagrin, no difference in the light speeds between the paths was found and they initially believed that they had failed. Despite repeating the measurements many times over as well as making modifications and improvements to their equipment, still no difference in light speeds between the two directions was found.


Attempts to explain this puzzling result were made in 1889 by Irish physicist George FitzGerald and by Dutch physicist Hendrik Lorentz in 1892. They hypothesised that the aether did exist and that motion through it would cause the arm of the interferometer in line with the aether wind to contract, this shorter distance compensating exactly for the supposed slower speed of light against the wind along this arm. This would in turn explain the absence of the expected shift in the interference fringes between the light travelling along the interferometer's two perpendicular paths. According to their calculations the length L of the arm in the direction of the aether wind would contract to L/γ, where γ (gamma) = 1/√(1 - v²/c²) and v is the speed of the Earth through the supposed aether. Simultaneously, the round trip time for light along this arm, t in the absence of any aether wind, would increase to γt, exactly matching the round trip time along the perpendicular arm. The correction factor γ became known as the Lorentz factor, and the associated transformation of lengths and times was later named the Lorentz transformation in honour of Lorentz.
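A minimal sketch in Python of the FitzGerald-Lorentz argument (the arm length is illustrative): contracting the in-line arm to L/γ makes the two round-trip times exactly equal, removing the expected fringe shift.

    import math

    c = 2.998e8   # speed of light, m/s
    v = 2.98e4    # supposed aether wind speed, m/s
    L = 11.0      # arm length, metres (illustrative)

    beta2 = (v / c) ** 2
    gamma = 1 / math.sqrt(1 - beta2)

    # Round-trip times in the aether model with rigid arms:
    t_parallel = L / (c - v) + L / (c + v)               # upwind + downwind
    t_perpendicular = 2 * L / (c * math.sqrt(1 - beta2))

    # The same arm contracted to L/gamma, as FitzGerald and Lorentz proposed:
    t_contracted = (L / gamma) / (c - v) + (L / gamma) / (c + v)

    print(t_parallel / t_perpendicular)    # ~1.000000005 - the expected shift
    print(t_contracted / t_perpendicular)  # 1.0 (to floating point accuracy)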

Unfortunately their hypothesis was proved to be incorrect, and subsequent measurements have all confirmed the absence of a luminiferous aether.


The momentous conclusion was that the luminiferous aether did not exist and that the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. By inference from Maxwell's recently (1873) formulated laws, the same conclusions also apply to electromagnetic radiation.

The Michelson-Morley experiment ultimately led to the proposal by Albert Einstein in 1905 that the speed of light is a universal constant.


Michelson was awarded the Nobel Prize in 1907, becoming the first American to win the Nobel Prize in Physics.


1887 Arrhenius publishes the equation named after him showing the exponential relationship between the rate at which a chemical reaction proceeds and its temperature, with the rate roughly doubling for each 10°C rise in temperature.
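In modern notation the equation gives the reaction rate constant k as k = A e^(-Ea/RT), where A is a frequency factor, Ea the activation energy, R the gas constant and T the absolute temperature. A minimal sketch in Python, assuming an illustrative activation energy of about 53 kJ/mol, reproduces the rule of thumb near room temperature:

    import math

    R = 8.314    # gas constant, J/(mol K)
    Ea = 53000   # activation energy, J/mol (illustrative)

    def rate_ratio(t1, t2):
        """Ratio k2/k1 from the Arrhenius equation; A cancels out."""
        return math.exp(Ea / R * (1 / t1 - 1 / t2))

    # A 10°C rise near room temperature roughly doubles the rate:
    print(rate_ratio(298, 308))   # ~2.0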


1887 American inventor Elihu Thomson patents the electric welding (resistance welding) process, the technique used for making battery interconnections.


1887 By this date, huge strides had been made in the electrical power industry since the invention of the first practical dynamo 20 years earlier.


1887 - 1890 Croatian-born physicist Nikola Tesla filed for numerous US patents on AC distribution systems and polyphase induction motors and generators based on the polyphase rotating field principle he discovered in 1882. This enabled inexpensive and plentiful electric power to be brought to the home consumer, sealing the fate of the DC system and the use of DC in domestic applications.

Contracted for $50,000 by Thomas Edison (a promoter of DC transmission) to improve his DC dynamos, Tesla worked night and day and delivered the solutions to Edison on time a year later, but Edison refused to pay, saying he had been joking about the contract. Tesla resigned in disgust and went to work for George Westinghouse, promoter of AC distribution and Edison's arch rival. Edison, with some success, spent the rest of his life trying to undermine Tesla.


For two years after Tesla left, Edison staged a morbid public relations campaign in what became known as the notorious current wars to demonstrate that the Westinghouse AC distribution system was dangerous by promoting the AC powered electric chair for carrying out the death penalty and calling such executions "Westinghousing". At the same time he arranged public executions of farm animals which he attended personally in the courtyard of his laboratory using AC power, starting with dogs and escalating to calves then horses. (The original electric chair using high voltage and direct current (DC) as a means of humane execution had been invented in 1881 by New York steam-boat engineer, dentist Alfred P. Southwick.)

Edison's system itself was responsible for a number of deaths due to mechanical failure or ignorance as the deceptively similar high voltage wires were installed overhead near to the more familiar low voltage telegraph wires.


In 1915 Reuters and the New York Times carried reports that Tesla and Edison were to share the Nobel Prize for physics. Mystery surrounds what happened next, but no such prize was awarded and it is claimed that Edison, whose fame and wealth were secure, turned down the award in order to deprive Tesla of a much needed $20,000. Others claim Tesla himself turned it down, not wanting to be associated with Edison whom he called "a mere inventor". The Nobel Foundation did not deny that Tesla and Edison had been their first choices.


Despite having over 800 patents Tesla died penniless.


1887 British engineer, born in Liverpool, with the distinctly un-British name of Sebastian Pietro Innocenzo Adhemar Ziani de Ferranti (his father was a photographer and his grandfather Guitarist to the King of the Belgians), designed the generation and distribution systems for Deptford Power Station (1887-1890), which at that time was the largest in the world. Power was supplied by four single phase 1000 kW, 10,000 Volt, 85 cycle/sec alternators. Ferranti pioneered the use of Alternating Current for the distribution of electrical power in Europe, authoring 176 patents on the alternator, high-tension cables, insulation, circuit breakers, transformers and turbines.

Ferranti also designed the first flexible high voltage cables for power distribution using wax-impregnated paper for insulation, a technique which was used exclusively until synthetic materials became available.


In the same year Ferranti also patented the induction furnace in which materials are heated by eddy currents induced within the material itself, generated by placing the material in the magnetic field of an induction coil. (Now used for domestic cooking hobs).


1887 British physiologist Augustus Waller of St. Mary's Medical School in London published the first human electrocardiogram - recorded by lab technician Thomas Goswell.


1887 Fibreglass invented again by Charles Vernon Boys, a physics demonstrator at London's Royal College of Science, who produced glass fibre strands by using the end of an arrow, fired from a miniature crossbow, to draw strands of molten glass from a heated vessel.


1887 German physicist Heinrich Rudolf Hertz discovered the photoelectric effect, that physical materials emit charged particles (electrons) when they absorb radiant energy. During electromagnetic wave experiments he noticed that a spark would jump more readily between two electrically charged spheres when their surfaces were illuminated by the light from another spark. Light shining on their surfaces seemed to facilitate the escape of electrons.

The photoelectric effect was not explained until 1905 by Albert Einstein, who used the quantum theory proposed in 1900 by Max Planck.


1888 Heinrich Hertz is generally considered to be the first to transmit and receive radio waves. (But see also Hughes 1880). Hertz demonstrated the existence of electromagnetic waves, predicted by Maxwell in 1864 and justified theoretically by him in 1873, by transmitting an electrical disturbance between two unconnected spark gaps situated 1.5 metres apart. He set up a wire loop containing a spark gap (the transmitter) through which a large spark was deliberately generated. This caused a small spark to jump across another spark gap (the detector) at the ends of a similar wire loop situated near to, but not connected to, the transmitting loop. The wire loops were effectively the world's first radio transmitting and receiving antennas.

He showed that radio waves travel in straight lines and can be reflected by a metal sheet.


Hertz died of a brain tumour at the age of 36 without ever seeing the practical applications which resulted from his discoveries. The unit of frequency is named the Hertz in his honour.


Like Hughes who discovered the phenomenon before him, Hertz failed to see the potential of radio for communications. Hertz told one of his pupils "I don't see any useful purpose for this mysterious, invisible electromagnetic energy".


Hertz' (or should we say "Maxwell's") radio waves now form the basis of all broadcast radio and television, radar, satellite navigation, mobile phones and much of the backbone of the world's communications systems. Maxwell provided the theoretical basis for the technology and Hertz showed it was possible but there were many, many worthy contributors whose inventions were needed to make it happen. Each country had its national champions who invented transmitters, receivers, antennas, tuners, detectors, filters, oscillators, amplifiers, transducers, displays, batteries and other components and a variety of coding, modulation, multiplexing, compression, encryption schemes, communications protocols and software. There were however five players associated with the fundamental developments in radio technology whose contrasting fortunes are worth mentioning briefly here namely: Marconi, Fessenden, Armstrong, Watson-Watt and Dippy.


See more about Electromagnetic Radiation and Radio Waves today.


1888 German physicist Wilhelm Ludwig Franz Hallwachs discovers another example of photoelectric emission. (Becquerel's was the first). Following up Hertz' experiments on how light affected the intensity of spark discharges, he noticed that the charge on an insulated, negatively charged plate leaked away slowly but when it was illuminated with ultraviolet light the charge leaked away very quickly. On the other hand a positively charged plate was unaffected by the light. This phenomenon, now known as the Hallwachs effect, was later explained to be due to the emission of electrons from certain metallic substances when exposed to light. It is the basis of the modern photocell. Note that this is different from the photovoltaic effect in solar cells.


1888 Spanish naval officer Isaac Peral built the first electrically powered submarine.


Later the same year the French launched Gymnôte, a 60 foot submarine designed by Gustave Zede. It was driven by a 55 horse power electric motor, originally powered by 564 Lalande Chaperon alkaline cells by Coumelin, Desmazures et Baillache with a total capacity of 400 Amp hours, weighing 11 tons and delivering a maximum current of 166 Amps. These batteries were replaced in 1891 by 204 Laurent-Cely Lead acid cells, which were in turn replaced in 1897. Although the batteries were rechargeable, they could not be charged at sea.


An electric submarine was also built by Polish inventor Stefan Drzewiecki for the Russian Tsar in 1884.


1888 Austrian botanist Friedrich Reinitzer investigating the behaviour of cholesterol in plants observed cholesteryl benzoate changing into its liquid crystal state, nine years before the invention of the CRT. For nearly a hundred years afterwards liquid crystals remained little more than a chemical curiosity until they were eventually adopted for use in LCD displays. See Dreyer (1950) and Fergason (1969).


1888 An irate Kansas City undertaker Almon B. Strowger patented the automatic telephone exchange.


When Alexander Bell first started selling telephones, he sold them in pairs because the few subscribers that there were at the time could connect to each other directly. As the number of telephones grew, the need quickly arose to be able to connect to more than one subscriber, but running telephone lines from each subscriber to every other subscriber was impractical, so the telephone exchange with a manual switchboard was born. Each subscriber was connected to a switchboard at the exchange. When a subscriber wanted to make a call he would call the exchange and the telephone operator would connect his line to the called party line via a cable on the switchboard to complete the circuit. Strowger was infuriated by this system, since there was another undertaker in town who happened to be friends with the telephone operator, and whenever someone called the operator asking to be put through to an undertaker, all the calls went to his competitor. He therefore set about designing an automatic exchange that would eliminate the need for operators.


In Strowger's design the telephone dial sent a series of pulses corresponding to each digit of the telephone number. At the telephone exchange the dial pulses would step a 10 position, rotary selector switch, called a uniselector, to a telephone line corresponding to the digit. For multi-digit telephone numbers, each line of the uniselector corresponding to the first digit was connected to a second uniselector, so that 100 lines could be accessed with 11 uniselectors. By adding a third stage, with 100 more uniselectors, 1000 subscribers could be accessed. In practice the uniselectors were designed as two-motion selectors with two dialling stages in one bank making 100 possible connections. The first stage was a rotary movement and the second stage was a linear movement with the selector stage moving up and down to connect to a set of contacts arranged vertically. This system formed the backbone of telephone communications in many countries of the world for almost 100 years.
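The digit-by-digit selection lends itself to a toy model. A minimal sketch in Python, purely illustrative (real exchanges used chains of electromechanical selectors, not software):

    # Toy model of pulse dialling through chained 10-position selectors.
    # Each dialled digit steps one selector stage, and the selected outlet
    # feeds the next stage, so 3 digits address 10 x 10 x 10 = 1000 lines.
    def route_call(number):
        line = 0
        for digit in number:
            pulses = int(digit) or 10      # dialling "0" sent ten pulses
            line = line * 10 + pulses % 10
        return line

    print(route_call("386"))   # selector positions 3, 8, 6 -> line 386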

Interestingly, before the familiar rotary telephone dial was invented, Strowger's first telephone sets used push button dialling, which required the caller to provide the pulses by tapping on the keys.


1888 AT&T engineer Hammond V. Hayes developed the common battery system which permitted a central battery to supply all telephones on an exchange with power, rather than relying upon each subscriber's own troublesome power supply. It allowed all telephone signalling and speech to be powered from single, large, central 24 Volt lead acid batteries mounted in the telephone exchanges, eliminating the need for magnetos and Leclanché cells to be installed in every subscriber's premises. The system is still in use today.


1889 Elihu Thomson invents the motor driven recording wattmeter.


1889 Russian engineer Michail Osipovich Dolivo-Dobrovolski working for AEG in Germany made the first squirrel cage induction motor. In 1891 he demonstrated a complete end to end system with three phase electrical generators delivering power to three phase induction motors over a three phase electricity transmission system.


1889 America's first alternating current (AC) hydroelectric power generating station was put into service at Willamette Falls, Oregon. Using Westinghouse generators it was also America's first AC transmission system providing single phase power at 4000 Volts which was transmitted to Portland 14 miles away where it was stepped down to 50 Volts for distribution and used to power the street lights.


1889 Walther Hermann Nernst a German physical chemist applied the principles of thermodynamics to the chemical reactions proceeding in a battery. He formulated an equation (now called the Nernst Equation) for calculating the cell voltage taking into account the electrode potentials, the temperature and the concentrations of the active chemicals. It applies to the equilibrium position i.e. no current. This is a special case of the more general Gibbs free energy relationship and is one of the basic formulas used by cell designers to characterise the performance of the cell.
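In modern notation the Nernst Equation is usually written E = E0 - (RT/nF) ln Q, where E0 is the standard cell potential, R the gas constant, T the absolute temperature, n the number of electrons transferred, F the Faraday constant and Q the reaction quotient of the active chemical concentrations. A minimal sketch in Python, using textbook values for a Daniell (Zinc-Copper) cell purely as an illustration:

    import math

    R = 8.314    # gas constant, J/(mol K)
    F = 96485    # Faraday constant, C/mol

    def nernst(e0, n, q, t=298.15):
        """Equilibrium (zero current) cell voltage from the Nernst equation."""
        return e0 - (R * t / (n * F)) * math.log(q)

    # Daniell cell: E0 = 1.10 V, two electrons, Q = [Zn2+]/[Cu2+]
    print(nernst(1.10, 2, 1.0))    # 1.10 V at equal concentrations
    print(nernst(1.10, 2, 10.0))   # ~1.07 V when Zn2+ is 10x Cu2+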

He also showed that in a reversible system the electrical work done is equal to the change in the Gibbs free energy.

Nernst stated the Third Law of Thermodynamics that it is impossible to cool a body to absolute zero, when it would have zero entropy, by any finite process. In a closed system undergoing change, entropy is a measure of the amount of energy unavailable for useful work. At absolute zero, when all molecular motion ceases and order is assumed to be complete, entropy is zero.


1890 Dundee born engineer James Alfred Ewing discovers the phenomenon of hysteresis which he named after the Greek "hysteros" meaning "later". He observed that, when a permeable material like soft iron is magnetised by being subjected to an external magnetic field, the induced magnetisation tends to lag behind the magnetising force. If a field is applied to an initially unmagnetised sample and is then removed, the sample retains a residual magnetisation, becoming a permanent magnet. He speculated that individual molecules act as magnets, resisting changes in magnetising potential, and described the characteristic curve of the magnetic induction B versus the magnetic field H which caused it, calling it a hysteresis loop. See diagram. The loop is also known as the BH loop, and it was later shown by Steinmetz that its area is proportional to the energy expended in taking the system through a complete magnetisation - demagnetisation cycle. This wasted energy appears as heat and represents a considerable energy loss in alternating-current machines which are subject to cyclic magnetic fields. On the other hand, hysteresis is useful for creating permanent magnets or temporary magnetic memory, once the main method of providing computer Random Access Memory (RAM).

The hysteresis loop is the signature of a magnet. A slender loop indicates a good temporary magnet which has low hysteresis losses and responds readily to a small magnetic field. Temporary magnets (also known as soft magnets) are needed in magnetic circuits subject to cyclic field such as those found in motors, generators, transformers and inductors. A fat hysteresis loop indicates a permanent magnet, or hard magnet, which will remain magnetized after the application and withdrawal of a large magnetic field.

The term "hysteresis" is now used to describe any system in whose response depends not only on its current state, but also upon its past history.


1890 Tesla produced a multi-pole generator capable of generating a high frequency carrier wave suitable for transmitting radio signals. It had 384 poles and produced a 10 kHz signal.


1891 German born, American mathematician and engineer Charles Proteus (Karl August) Steinmetz developed an empirical law for determining the magnitude of the losses due to the recently discovered phenomenon of magnetic hysteresis (see above) which he published in the magazine, "The Electrical Engineer".

The Hysteresis law for the loss of energy per magnetization cycle per unit volume "W" is given by Steinmetz's equation as:

W = η Bmax^x

where Bmax is the maximum flux density, η is the hysteresis coefficient (a constant depending on the molecular structure and content of the material) and x is the Steinmetz exponent, between 1.5 and 2.3 but typically 1.6

Steinmetz also provided data on the magnetic characteristics of all magnetic materials then in current use.

As a rule of thumb, when the magnetic flux induced by the alternating current doubles, the hysteresis loss triples. The ability to predict the hysteresis losses for different materials and shapes enabled the design of more efficient machines, a process which had previously been trial and error.
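A minimal sketch in Python of that rule of thumb, using the typical exponent x = 1.6 (the coefficient η is material dependent and set to 1 here purely for illustration):

    # Steinmetz's equation: loss per cycle per unit volume W = eta * Bmax**x
    eta = 1.0   # hysteresis coefficient (material dependent, illustrative)
    x = 1.6     # typical Steinmetz exponent

    def hysteresis_loss(b_max):
        return eta * b_max ** x

    # Doubling the flux density roughly triples the loss: 2**1.6 = 3.03
    print(hysteresis_loss(2.0) / hysteresis_loss(1.0))   # ~3.03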


In 1893 Steinmetz developed the phasor method using complex or imaginary number notation for representing the varying currents and voltages in AC circuits. This simple and practical method revolutionised the analysis of AC circuits.
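Python's built-in complex numbers make the phasor method easy to illustrate. A minimal sketch for a series resistor-inductor circuit on a 50 Hz supply (the component values are illustrative):

    import cmath, math

    R = 10.0    # resistance, ohms (illustrative)
    L = 0.1     # inductance, henries (illustrative)
    f = 50.0    # supply frequency, Hz
    V = 230.0   # supply voltage (rms), taken as the zero-degree reference

    omega = 2 * math.pi * f
    Z = complex(R, omega * L)   # impedance as a single complex number
    I = V / Z                   # Ohm's law works unchanged with phasors

    print(abs(I))                        # current magnitude, ~6.98 A
    print(math.degrees(cmath.phase(I)))  # phase, ~-72.3 degrees (current lags)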


Called the Wizard of Schenectady where he worked for General Electric, Steinmetz also carried out research on lightning phenomena. He was a prolific inventor with over 200 patents to his name including an electric car, the 1917 Dey electric roadster, for which he designed a compact double-rotor motor which was an integral part of the rear axle avoiding the need for a differential.


Steinmetz was physically handicapped with a deformed left leg, humped back, and diminutive stature, only four foot three inches (1.3 m) tall, but this was compensated by a brilliant mind, congenial personality and infectious vitality. Raised in poverty, Steinmetz was a lifelong socialist whose early political activities brought him into conflict with the German authorities, resulting in his flight from Germany. Throughout his life he applied his considerable energies to helping others.


1891 Another patent for the three-phase electric power generation and transmission system, this one granted to Jonas Wenström a Swedish engineer. His patent was disputed for many years by other claimants, including Tesla (1887), Dobrovolski (1889) and Hopkinson who patented the principle, as applied to DC power transmission, in 1882. It was finally confirmed in 1959, sixty eight years after Wenström died.


1891 American electrical engineer Harry Ward Leonard introduced the motor speed control system which bears his name. For almost a century, until the advent of thyristor controllers, it was the only practical way of providing a variable speed drive system from the fixed frequency mains electricity supply.


1891 Heinrich Hertz, with his Hungarian student Philipp Eduard Anton von Lenard, discovered that cathode rays could penetrate a thin Aluminium plate. Because gas could not pass through the foil they surmised that the cathode ray was a wave, publishing their results in 1894. In 1897 J.J. Thomson showed that cathode rays were streams of particles which he called corpuscles and which we now call electrons.

Lenard was awarded the Nobel Prize for Physics in 1905 for his work on cathode rays. He was a strong proponent of the German "Master Race" and became Adolf Hitler's advisor and Chief of "Deutsche Physik" or "Aryan Physics". He claimed that so called "English physics" had stolen their ideas from Germany and denounced Einstein's theory of relativity as a deliberately misleading Jewish fraud perpetrated by "Jewish physics". He was expelled from his post at Heidelberg University by the Allied occupation forces in 1945.


1891 One of the most important inventions in radio telegraphy, the coherer, was demonstrated at the French Academy of Science by Edouard Eugène Désiré Branly, professor of physics at the Catholic University of Paris, and the results were published in La Lumière Électrique. In 1890 Branly rediscovered the coherer effect, that loose Iron or similar filings would coalesce under the influence of an electric or magnetic field, dramatically reducing the resistance of a path through the material. Though he was not the first to notice the phenomenon, he was the first to see its potential for detecting radio waves. His device consisted of a small glass tube containing the filings or powder in series with a battery and a galvanometer for indicating changes in the current due to the presence of an electromagnetic field. It was much more sensitive than the spark detector used by Hertz, enabling transmissions over much longer distances to be detected, and for a decade it became the telegraph industry standard.

Branly's design was improved by Oliver Lodge who added a trembler which shook the filings loose for decohering between signal pulses, readying the device for detecting the next pulse. Unfortunately the coherer was only suitable for detecting the reception of a pulse of radio waves such as Morse code and could not be used for detecting the varying voice signals which, Fessenden showed, could be carried on a radio wave.


Contrary to legend, neither Branly's nor Lodge's coherer was used by Marconi for his first trans-Atlantic radio transmission in 1901. This pioneering communication needed a particularly sensitive detector and this was provided by an Iron-Mercury-Iron Coherer invented in 1899 by Indian physicist Sir Jagadish Chandra Bose of Presidency College, Calcutta. It was an example of an imperfect junction coherer which reset itself after receiving a pulse, so there was no need for decohering.


On the basis of his coherer design Branly is revered in France as "The Father of Radio" and some text books even credit him with a Nobel prize for the invention. In fact Branly was nominated three times for the honour but he never actually won the prize.


Prior to Branly and the invention of radio, several others had investigated variations of the coherer effect observed when loosely compacted particles or lightly touching objects were subject to electrical or magnetic fields.

  • In 1866 English engineer Samuel Alfred Varley used the coherer effect in his invention of the lightning bridge for protecting telegraph circuits and their operators. The coherer, containing loosely packed Carbon granules in a wooden box, was connected in parallel to the telegraph equipment by a wire running from the telegraph line to the ground. Under normal circumstances, no electrical current could flow through the carbon granules because of their high resistance. But the high voltage between the line and the ground produced by a lightning strike caused the coherer to conduct, providing a route for the lightning energy to flow to ground, thus bypassing and protecting the telegraph equipment.
  • In 1884, Italian school teacher Temistocle Calzecchi-Onesti observed that metal filings contained in an insulating tube will conduct an electrical current when influenced by electric or magnetic fields, but that this property disappears if the tube is shaken. He also noticed that Copper filings between two Copper plates had two resistance states - conducting when a high voltage was applied between the plates and non-conducting for low voltages.

1891 German aviation pioneer Otto Lilienthal began a series of over 2000 experimental glider flights in gliders of his own design. Jumping from low hills near Berlin he was able to make flights as far as 820 feet (250 m) demonstrating that flying machines could be possible.

His gliders were similar to modern hang gliders but with a limited range of control made possible by the pilot changing the centre of gravity by shifting his body. Designs were based on Cayley's theories and his own observations of bird flight and he made both monoplane and biplane versions.

In 1889 he published a book, Birdflight as the Basis of Aviation, outlining his own theories and experiences of flight, which has become an aviation classic.


Tragically, in 1896, at the age of 48, while piloting his regular glider he failed to recover from a stall and fell 49 feet (15 m) to the ground, dying from his injuries 36 hours later in hospital.

His last words were "Opfer müssen gebracht werden!" roughly translated as "Victims are necessary" or "Sacrifices must be made".


1891 Russian polymath Vladimir Shukhov patented the first thermal cracking method used in oil refineries for breaking down heavy hydrocarbons in petroleum to increase the percentage of the lighter, more useful volatile products such as paraffin (kerosene) and petrol (gasoline). Paraffin had previously been separated, in earlier times by the Han Chinese and in more modern times by Drake and others, using a process of simple distillation. Cracking gave a welcome boost to the use of the four stroke, spark ignition petrol engine invented by Otto in 1862.


1892 British born American chemist Edward Weston invented and patented the saturated Cadmium cell. Known as the Weston Standard Cell, it was adopted as the International Standard for electromotive force (EMF) in 1911 and was used as a calibration standard by the US National Bureau of Standards for almost a century. It had the advantages of being less temperature sensitive than the previous standard, the Latimer Clark Standard Cell which it replaced, and of producing a voltage of 1.0183 Volts, conveniently near to one Volt. It was similar to Clark's cell but used a Cadmium anode rather than Zinc.

He had revolutionised the electroplating industry in 1875 by replacing the batteries used to provide the plating current with dynamos which he designed and made himself, and in 1886 he developed a practical, precision, direct reading, portable instrument to accurately measure electrical current, a device which became the basis for the moving coil voltmeter, ammeter and wattmeter.

A prolific inventor, Weston held 334 patents.


1892 Eccentric Kentucky melon farmer Nathan B. Stubblefield "demonstrated" wireless telephony using a ground battery or earth battery (first proposed by Bain in 1841), for transmitting signals through the ground. Extravagant claims were made for the applications of the ground battery, from telephony and broadcasting to power generation, but they were never substantiated and Stubblefield, claiming he was swindled, died of starvation, an impoverished recluse. He is honoured in his hometown of Murray, Kentucky as "The Real Father of Radio".


1892 Dutch physicist Hendrik Antoon Lorentz formulates Lorentz Law, a fundamental equation in electrodynamics which gives the force F on a charged particle in an electromagnetic field as the sum of the electrical and magnetic components as follows:

F = qE + qv × B

Where q is the charge on the particle, v is its velocity, E is the electric field and B is the magnetic field. See Diagram of Lorentz Force.

This law describes the principles on which almost all electrical machines and electromechanical devices are based.
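A minimal numerical sketch in Python (the field values are illustrative, and the cross product is written out by hand):

    # Lorentz force F = qE + q v x B on a charged particle.
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def lorentz_force(q, E, v, B):
        vxB = cross(v, B)
        return tuple(q * (E[i] + vxB[i]) for i in range(3))

    # An electron moving along x through a magnetic field along z
    # feels a force along y (SI units, illustrative values):
    q = -1.602e-19   # electron charge, coulombs
    print(lorentz_force(q, (0, 0, 0), (1e6, 0, 0), (0, 0, 0.1)))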


Lorentz law complements Maxwell's equations describing electric and magnetic fields and forms the basis of the theory of electrodynamics.

Lorentz developed a mathematical theory of the electron before its existence was proven, work for which he received the Nobel Prize in 1902.


1893 Two German schoolmasters, Johann Phillip Ludwig (Julius) Elster and Hans Friedrich Geitel, discovered the sensitive photoelectric effect of alkali metals such as Sodium or Potassium in a vacuum tube exposed to the visible light spectrum. They later designed the first practical photoelectric cell or "electric eye", which provides a voltage output varying in relation to the intensity of light impinging upon it. They declined to patent their invention. The photoelectric effect is the basis of all electronic image tubes.


1893 Contract to supply hydroelectric generators to harness the power of Niagara Falls using Tesla's AC system awarded to Westinghouse, signalling the beginning of the end for DC generation and transmission, the end of the Current Wars and a triumph for Tesla. Rival Edison had lined up influential backers including J. P. Morgan, Lord Rothschild, John Jacob Astor IV, W. K. Vanderbilt and initially Lord Kelvin, a proponent of direct current, who headed an international commission to choose the system. After seeing Tesla's AC system used to light the 1893 World's Columbian Exposition at Chicago, Kelvin was converted and became a supporter of the AC system.

The system was completed in 1895 with three enormous 5,000 horsepower generators supplying 2,200 Volts for local consumption, stepped up to 11,000 Volts for transmission to Buffalo 22 miles away. The capacity was later increased to 50,000 horsepower with 10 generators and the transmission Voltage increased to 22,000 Volts for longer distance transmission.


1893 French born American railway engineer and aviation pioneer Octave Chanute organised an International Conference on Aerial Navigation at the World's Columbian Exposition in Chicago. He was an enthusiastic and influential promoter of aviation developments and in 1894 he published Progress in Flying Machines, a survey of all published research into fixed-wing heavier-than-air aviation developments up to that date. The book became a bible for all would-be aviators at the time.


1893 German engineer Rudolf Christian Karl Diesel, born in Paris of Bavarian parents, published a paper entitled "Theorie und Konstruktion eines rationellen Wärmemotors zum Ersatz der Dampfmaschine und der heute bekannten Verbrennungsmotoren" - "Theory and Construction of a Rational Heat-engine to Replace the Steam Engine and Combustion Engines Known Today" in which he described his ideas for the compression ignition internal combustion engine, now known as the Diesel engine. The following year he applied for a patent for the engine. The German company Maschinenfabrik Augsburg Nürnberg AG (MAN) gave him the opportunity to test and develop his ideas.

At the request of the French Government, who were looking for locally produced fuels for their African colonies, the Otto Company demonstrated at the Paris Exhibition in 1900 a small Diesel engine running on peanut oil, the first bio-diesel. Diesel himself also investigated and promoted the use of alternative fuels in his engines. Compression ignition engines using the Diesel cycle are today taking market share from the more popular spark ignition Otto cycle engines due to their superior efficiency.

Similar compression ignition engines had already been built in 1886 by English inventor Herbert Akroyd-Stuart for which he applied for a patent in 1890 entitled "Improvements in Engines Operated by the Explosion of Mixtures of Combustible Vapour or Gas and Air"

Diesel's inspiration was a modernised version of the ancient Chinese "Firestick" which was used as a cigarette or gas lighter. A piece of tinder was held in a glass tube containing a plunger. When the plunger was forced rapidly into the tube, as in a bicycle pump, the heat of compression would ignite the tinder.


On an apparently normal business trip from Belgium to attend, as guest of honour, the opening of a new Diesel engine factory in England in 1913, Diesel mysteriously disappeared from a cross Channel steamer. His body was recovered from the sea ten days later, but his death has never been satisfactorily explained. Speculation ranges from suicide (he was thought to be in financial difficulties, though he was about to secure a new royalty stream), through accident, to assassination (on the verge of the First World War, agents of Imperial Germany possibly did not want him to allow the "Allies" access to his patents).

See also Heat engines.


1894 The first ever radio signal was sent 55 metres from one building to another in Oxford during the 1894 meeting of the British Association for the Advancement of Science, in a lecture commemorating the work of Hertz who had died earlier that year. The lecture and demonstration were given by British physicist Oliver Joseph Lodge, who arranged the transmission of the Morse code like signals which were transmitted by electrical engineer Alexander Muirhead and detected by Lodge using a modified Branly coherer rather than Hertz's spark gap. The sender used a telegraph key to send a pulse and the coherer in the receiver caused a bell to ring. It was just like a telegraph link but without the interconnecting wire. Lodge later formed a business partnership with Muirhead to commercialise a number of fundamental radio technology inventions which they had patented.

In 1911 they sold their patents, one of which was Lodge's patent for the tuned circuit to radio pioneer Guglielmo Marconi.

Lodge was knighted for his contribution to physics but much of his later life was devoted to his interest in the paranormal, "life after death" and spiritualism about which he wrote several books.


1895 German physicist Wilhelm Conrad Röntgen, experimenting with a Crookes tube, accidentally discovered X-rays, high frequency electromagnetic radiation, while investigating the glow from the cathode rays. He gave his preliminary report "Über eine neue Art von Strahlen" to the president of the Würzburg Physical-Medical Society, accompanied by experimental radiographs and by the image of his wife's hand. Within three years, every major medical institution in the world was using X-rays. Röntgen, who won the first Nobel prize in physics in 1901, declined to seek patents or proprietary claims on the use of X-rays.


Röntgen used a very high voltage to accelerate the electrons in a high speed electron beam and X-rays were produced when the beam was suddenly decelerated when it hit the target electrode. These rays had a continuous frequency spectrum and are now called bremsstrahlung radiation, or "braking radiation".

Characteristic X-rays on the other hand have a spectrum with definite energy levels which are produced when electrons make transitions between characteristic atomic energy levels in heavy elements.


X-ray technology is now widely used in materials science. See Bragg (1912)


1895 French physicist Pierre Curie discovered that for paramagnetic materials such as Aluminium or Platinum, which become magnetised in a magnetic field but whose magnetism disappears when the field is removed, the magnetic coefficient of attraction varies in inverse proportion to the absolute temperature (Curie's Law). He also showed that when ferromagnetic materials, such as Iron and Nickel, which tend to retain their magnetic properties, are heated above a characteristic temperature dependent on the material, now called the Curie point or Curie temperature, they lose all of their magnetic properties.

The magnetic force associated with these materials is determined by the magnetic moment, a dipole moment within an atom which originates from the angular momentum and spin of electrons producing a tiny magnetic field. The magnetic fields are normally randomly oriented so that their overall fields cancel out, but small groups of atoms may be aligned with their fields in parallel, reinforcing each other in so called magnetic domains with a distinct magnetic orientation. When placed in a magnetic field, the orientation of these domains tends to line up in the direction of the applied field.

In the case of paramagnetic materials, the magnetic moment is quite feeble and the domains return to a random distribution once the external field is removed.

Ferromagnetic materials however have a much stronger magnetic moment and tend to retain their magnetic properties up to the Curie temperature (770 °C or 1,418 °F for iron), at which point the thermal agitation of the atoms causes the domains to become randomly oriented once more, so that the material loses its overall magnetic moment. Conversely, if ferromagnetic materials with no retained magnetic moment are heated to a temperature above the Curie point and allowed to cool in an external magnetic field, at the Curie point the magnetic domains will tend to line up spontaneously with the field and will retain their magnetic moment once the field is removed.

Curie also showed that there is no significant magnetic effect of temperature on diamagnetic materials such as Copper, Mercury and Gold.


1895 Alexandr Popov, an instructor at the Russian Imperial Navy's torpedo school, experimented with a variety of antennas (aerials) to capture electromagnetic radiation from lightning discharges. His receiver consisted of a coherer between an aerial wire connected to a tall mast and an earth (ground) wire connected to water pipes. With it he successfully proved that the discharge emits electromagnetic waves. His experiment did not include a transmitter.

In 1890 he had repeated Hertz' experiments for the benefit of his students and in 1896, at a meeting of the Russian Physical-Chemical Society, he repeated Lodge's 1894 demonstration of radio signalling by sending the Morse coded message "Heinrich Hertz" over a radio link. Like Lodge, Popov was more interested in pursuing theoretical physics than in commercialising the idea, leaving the door open to the less technically competent but more commercially astute Marconi. (See following item). In later years the existence of these experiments was used to justify the claim by Popov's supporters that he was "The Father of Radio".


1896 Inspired by Hertz, 22 year old Italian Marchese Guglielmo Marconi, son of the Irish-born heiress to the Jameson whiskey fortune, was granted his first patent (in England) for radio telegraphy using Hertzian waves. This was claimed to be the first application of radio waves and the first to show that practical radio communications were possible. But Marconi had basically just patented the system demonstrated by Lodge two years earlier, and with it the principle of radio communications. Though he had been helped by William Preece, the Chief Engineer of the British Post Office, and his staff, Marconi himself added little to the system, which was the radio equivalent of Morse's telegraph: it just switched the radio wave on and off in "dots and dashes" and did not carry voice signals. Because Marconi's "invention" was enclosed in a box, the patent office did not consider the technology to be in the public domain and so granted the patent. Lodge and Preece had been kept in the dark about the patent application and felt deceived.


It was Fessenden who first carried voices over the radio waves ten years later. Marconi was a great promoter; he developed transmitters, receivers and antennas, and his telegraph systems were soon in use throughout the world, spanning the Atlantic in 1901 and earning him fame and fortune. He was awarded the Nobel prize for physics in 1909.


See also Wireless Wonders.


1896 American engineer William W. Jacques developed a Carbon battery producing electricity directly from coal. 100 cells with Carbon electrodes and alkaline electrolyte were placed on top of a coal fired furnace that kept the electrolyte temperature between 400 and 500 °C, and air was injected into the electrolyte to react, he thought, with the carbon electrodes. The output was measured as 16 Amps at 90 Volts. Initially, Jacques claimed an 82 percent efficiency for his battery, but he had failed to account for the heat energy used in the furnace and the energy used to drive the air pump. The real efficiency was a meagre 8 percent. Further research demonstrated that the current generated by his apparatus was obtained not through electrochemical action but through thermoelectric action.


1896 Antoine Henri Becquerel discovered radioactivity when Uranium crystals wrapped in paper and left in a drawer with photographic plates created an image of the crystals on the plates. Radioactivity, we now know, is the spontaneous breakdown or decay of unstable atomic nuclei resulting in the emission of radiation which may be alpha particles (Helium nuclei), beta particles (electrons), gamma rays (high energy electromagnetic radiation), or nucleons (neutrons or protons resulting from spontaneous nuclear fission - a splitting of the atom). At the time however the nature of these mysterious rays was not known and it was several years before Rutherford and others were able to identify the content of the radiation.

Radioactivity can come from the decay of naturally occurring radioisotopes in a process now known as beta decay. Nuclear batteries are designed to make use of the radiated energy of certain radioactive isotopes by converting it into electrical energy.


Becquerel came from a distinguished family of scholars and scientists. His father, Alexandre-Edmond Becquerel, was a Professor of Applied Physics, discovered the photovoltaic effect and had done research on solar radiation and on phosphorescence, while his grandfather, Antoine César Becquerel, had been a Fellow of the Royal Society and invented a non polarising battery and an electrolytic method for extracting metals from their ores.


1896 In the USA, the flashlight or torch was invented by David Misell. The original versions were designed to attach to a tie or scarf and were sold by a Russian immigrant, Conrad Hubert, in his novelty shop where Misell went to work. Although portable battery powered lamps had been in use in the UK since 1881, where they were patented by Burr and Scott, the first flashlight as we know it today was introduced by Hubert in 1898. It was designed by Misell and was powered by a "D" cell which, with the light bulb and a rough brass reflector, was contained in a paper tube. Hubert went on to found Ever Ready, and patents for subsequent flashlights, although designed by Misell, were awarded to Hubert.

The invention of the Tungsten filament lamp by Coolidge in 1910 greatly improved the performance of the torch which in turn created a growing market for batteries, popularising the "D" cell format we still use today.


1896 H. J. Dowsing patented the electric starter which he fitted to a modified Benz motor car purchased from maker Walter Arnold who made them under licence as the Arnold Sociable in East Peckham, Kent. Dowsing's starter consisted of a dynamotor, coupled to a flywheel, which acted as a dynamo to charge the battery and as a motor when needed to start the engine, an idea recently rediscovered as the integrated starter alternator (ISA). The first production electric self-starter was produced by Dechamps in Belgium in 1902.


1896 American astronomer, inventor, secretary of the Smithsonian Institution, professor Samuel Pierpont Langley successfully launched a series of unmanned, steam powered model aircraft to demonstrate the potential for controlled flight in a heavier than air machine. They were launched from a boat on the Potomac River and one of these flew over 4000 feet (1220 m) while another covered 5000 feet (1525 m). To test his theories and designs he constructed a version of Cayley's "whirling-arm apparatus" to measure the aerodynamic forces on models as they were propelled at high speed through the air. He investigated various wing profiles and showed that even a brass plate could be kept aloft if its speed through the air was high enough, from which he concluded that a heavier than air machine would be viable.

Based on the success of his models, in 1898 Langley received a grant of $50,000 from the US War Department and a further $20,000 from the Smithsonian to develop a manned airplane, which he called an "aerodrome" (Greek - aeros "air" and dromos, "road" or "course").

To save weight, Langley's airplane had no landing gear so it was designed for catapult launching and landing on water. It had pitch and yaw control but no roll control, and a 50 horsepower engine, more than four times the power of the engine used in the Wright Flyer. The airframe of the plane was very flimsy and the engine was very heavy - too heavy. It was ready in 1903 but it made only two flights, one on October 7 and one on December 8, both of which ended in crashes before the plane got airborne. In the second crash the plane broke up, dumping the pilot in the Potomac and leaving half of the plane still on the launching boat and the other half in the river. With the benefit of perfect hindsight, the army, who had paid for the plane and witnessed the tests, announced that the reason for the failure was that the propellers were too small.

The newspapers revelled in Langley's misfortune, particularly the New York Times. After the first crash of what they called Langley's "airship", they offered their opinion that it would be at least 1000 years before man could devise a flying machine, basing their prediction on the principles of evolution. After the second crash they advised Langley to give up and stick to his academic pursuits.

One week later, the Wright brothers made the first successful controlled flight of a heavier than air machine.


NASA's Langley Research Centre at Hampton, Virginia is named in Langley's honour.

Langley also invented the bolometer in 1878.


1897 British physicist Joseph John (J J) Thomson, working at the Cavendish Laboratory in Cambridge investigating the effect of magnetic fields on cathode rays in a Crookes tube, discovered the electron and calculated the ratio between its charge and its mass, the e/m ratio. He determined that the particles were identical no matter what metal had emitted them and that they were the universal carriers of electricity and a basic constituent of matter. He also calculated the velocity of the electron in the cathode ray to be 1/10 of the speed of light. He knew that the electrons were emitted by the atom but was unaware of their original distribution within the atom and assumed that they were randomly distributed within the mass of the atom like raisins in a cake or plums in a pudding. His model of the atom became known as the plum pudding model but was later shown to be incorrect.


J.J. Thomson was awarded the Nobel prize in 1906 for his studies on the conduction of electricity through gases and for the discovery of the electron and his pioneering work on the structure of the atom.

At the time there was great rivalry between German researchers who believed cathode rays to be waves and their British counterparts who believed them to be particles. In one of the greatest ironies of modern physics J.J. Thomson was awarded the Nobel Prize for showing that the electron is a particle, while his son, George Paget Thomson later received the Nobel prize for proving that the electron was in fact a wave.

Seven of Thomson's students went on to gain Nobel prizes in their own right.

Thomson died in 1940 and in his lifetime he never drove a car or travelled in an aeroplane. He had a passion for nature and said that if he had to live his life over again he would be a botanist.


Ever since Faraday published his work on the magnitude of the weights of the products of electrolysis in 1833, experimenters had postulated the idea that electric current was carried by corpuscles or particles, but none had been able to isolate or describe such particles. By the late 1890s however, several other investigators working contemporaneously with Thomson had identified the charged particle we now call the electron and calculated the e/m ratio just as Thomson did in April 1897. These included Pieter Zeeman at the University of Leiden, who in 1896 observed the spreading of spectral lines caused by the influence of a magnetic field, concluded that the light waves were produced by the movement of ions, and from this experiment was able to calculate the e/m ratio. (The ion theory was later superseded to take account of electron spin properties, which were demonstrated by Stern and Gerlach in 1922. See diagram of the Zeeman effect.) At the same time, each working independently with cathode rays, Emil Wiechert at the University of Königsberg, Walter Kaufmann at the University of Berlin and Philipp Lenard, an assistant of Heinrich Hertz carrying on Hertz' experiments after his death, all published similar results for the value of the e/m ratio early in 1897. It was Thomson however who identified the electron as a sub atomic particle, while the others were hampered by trying to reconcile the evidence of a particle with the notion of the aether.

See also spectral line spreading by the Stark effect.


In 1902 Zeeman shared the Nobel Prize for Physics (only the second time it had been awarded for physics) with his mentor Lorentz who had predicted the Zeeman effect.

History is kind to the winners of Nobel prizes. Once conferred, the other participants in the race are forgotten.


1897 The first oscilloscope using a cathode ray tube (CRT) scanning device was invented by the German scientist Karl Ferdinand Braun. He made many contributions to radio technology including antennas and detectors. He was awarded the Nobel prize with Marconi in 1909 for this work. During the First World War he was interned by the US government as an enemy alien and died before the war ended.


1897 Regenerative braking first used on a car to recharge the battery by M. A. Darracq in Paris.


1897 Russian mathematics teacher, Konstantin Eduardovich Tsiolkovsky, built a wind tunnel in his apartment which he used to explore aerodynamics and the drag characteristics of different shapes. During the same year he also developed the fundamental Theories of Rocket Motion which he published as "The Exploration of Cosmic Space by Means of Reaction Devices". In it he showed that a rocket's velocity is proportional to its effective exhaust velocity and he defined the Specific Impulse, which became the standard measure for comparing the energy produced by rocket engines and propellants.

He defined the Specific Impulse (I), expressed in seconds, as follows:

I = F / (dm/dt)

Where F is the rocket thrust in pounds and dm/dt is the propellant consumption in pounds per second.

Alternatively this can be written in terms of the rocket exhaust velocity ve as follows:

I = ve / g

Where g is the acceleration due to gravity (32 ft/sec²)

He also showed that the change δv in velocity of a rocket as it consumes its fuel is given by:

δv = ve ln(m0/m1)

Where ve is the exhaust velocity, m0 is the initial total mass, including propellant, m1 is the final total mass and ln is the natural logarithmic function.

This is known as the Tsiolkovsky Equation

It can also be expressed in terms of the Specific Impulse of the fuel as follows:

δv = I.g.ln(m0/m1)

With these relationships he was able to compare the effectiveness of different fuels, to calculate thrust and flight velocity as a function of fuel consumption and to show the influence of gravity during vertical ascents.
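As an illustration of how these relationships are used, the following short Python sketch (an illustrative addition with assumed figures, not taken from Tsiolkovsky's work) evaluates the exhaust velocity and the ideal velocity change for a hypothetical rocket:

```python
import math

g = 9.81  # acceleration due to gravity in m/s^2 (SI equivalent of 32 ft/sec^2)

def specific_impulse(thrust_n, mass_flow_kg_s):
    # I = F / (dm/dt): in SI units the result is divided by g to give seconds
    return thrust_n / (mass_flow_kg_s * g)

def delta_v(ve, m0, m1):
    # Tsiolkovsky Equation: dv = ve.ln(m0/m1)
    return ve * math.log(m0 / m1)

# Assumed figures: a motor with a specific impulse of 300 seconds,
# a 10,000 kg rocket of which 7,000 kg is propellant.
I = 300
ve = I * g                       # exhaust velocity, from I = ve/g
print(delta_v(ve, 10000, 3000))  # ideal velocity change, ~3,540 m/s
```

Note that the achievable velocity grows only logarithmically with the propellant fraction, which is why the multi-stage rockets described below are needed to reach orbital velocities.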

In 1903 he published a summary of studies he had carried out into liquid fuelled rockets and the optimum shape for rocket exhaust nozzles. In it he proposed the ideas of fuel pumping systems, regenerative cooling and directional control by means of rudders in the exhaust stream, all of which were first successfully introduced on the German V-2 rocket forty years later.

In 1911 he confirmed the Earth's Escape Velocity to be 25,000 miles per hour and calculated the Orbital Velocity for Earth Satellites to be 17,800 miles per hour. See also Entering Space.

He had a keen interest in space travel and published many works on space stations and life support systems. He also developed the concept of the Multi-stage Rocket, which he called a "rocket train", to achieve higher velocity and range with the same initial vehicle weight, payload weight and propellant capacity or alternatively to carry a greater payload with a smaller initial weight. By jettisoning the propellant tanks and engines of the first stages once the propellant is used up, the later stages do not have to waste energy in accelerating a useless mass.

Tsiolkovsky's early works were the first academic studies on rocketry but unfortunately they were published in Russian and at the time they did not achieve a high circulation in the international scientific community. Despite his interest and the wide ranging scope of his contribution to the science, he never built any rockets.


See also Rocket propulsion


1897 German researcher W. Peukert discovered that the faster a battery is discharged the lower its available capacity, a phenomenon for which he developed the empirical law C = IⁿT, known as the Peukert Equation, where "C" is the theoretical capacity of the battery expressed in amp hours, "I" is the current, "T" is time, and "n" is the Peukert Number, a constant for the given battery. A similar phenomenon occurs when a battery is charged. See also charging times for an explanation and a beer analogy.
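As a rough illustrative sketch (the battery figures below are assumed, not Peukert's), rearranging the equation as T = C/Iⁿ gives the expected discharge time at any current and shows how the available capacity shrinks at high discharge rates:

```python
def discharge_time_hours(capacity_ah, current_a, n):
    # Peukert Equation rearranged: C = I**n * T, so T = C / I**n
    return capacity_ah / current_a ** n

# Assumed 100 Ah battery with a Peukert Number of 1.2 (typical of lead acid):
print(discharge_time_hours(100, 5, 1.2))   # ~14.5 hours, i.e. ~72 Ah delivered
print(discharge_time_hours(100, 20, 1.2))  # ~2.7 hours, i.e. ~55 Ah delivered
```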


1898 Danish telephone engineer Valdemar Poulsen patented the Telegraphone, the first magnetic recording and playback apparatus. It used a magnetised wire as the recording medium.


1898 The Proton discovered by German physicist Wilhelm Wien. Using an apparatus designed by Goldstein which generated canal rays of positively charged particles he determined that canal rays were streams of protons with mass equal to the mass of a Hydrogen atom. Rutherford, whose experiments in 1917 identified the hydrogen nucleus as a constituent of other nuclei, later coined the word proton.


Wien also discovered the inverse relationship between the wavelength of the peak of the emission of a black body and its temperature now called Wien's Law. He was awarded the Nobel Prize in 1911 for his work on Black Body Radiation.
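In its modern form, Wien's Law is usually written λmax = b/T, where λmax is the wavelength of peak emission, T is the absolute temperature of the body and b is Wien's constant, approximately 2.898 × 10⁻³ metre Kelvins. Thus the Sun, with a surface temperature of about 5,800 K, radiates most strongly at around 500 nanometres, in the middle of the visible band.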


1898 Oliver Lodge patented the principle of tuned circuits which he called "syntonic tuning" for generating and selecting particular radio frequencies. This is the basis of selecting a single desired radio station from all those which are transmitting by tuning the receiver to the transmitter. Not only was this more efficient, it was fundamental to the orderly use of the radio spectrum and the establishment of practical radio communications systems which did not interfere with each other.


1898 Pierre and Marie Sklodowska Curie discovered Radium named from the Latin "radius" meaning "ray" and Polonium which Marie named after her native Poland. With very limited resources, during the course of four years, the Curies refined 8 tonnes of waste pitchblende to produce 1 gram (0.04 ounces) of pure Radium Chloride. (It was not until 1911 that she was able to isolate pure Radium). Radium is over one million times more radioactive than the same mass of Uranium and one gram of Radium releases 4,000 kilojoules (1.11 kWh) of energy per year. In 1900 they showed that beta rays and cathode rays are identical. Unaware at the time of the dangers of radiation, by 1903 they had both begun to show signs of radiation sickness. Marie shared the 1903 Nobel Prize for Physics with her husband Pierre and Henri Becquerel for the investigation of radioactivity, a phenomenon which she named. In 1906 Pierre was unfortunately killed when he was run over by a horse drawn cart. Marie continued their investigations and in 1911 was awarded a second Nobel Prize, this time for Chemistry for her discovery of two new elements.

Despite her achievements and her two Nobel prizes, she was rejected by the French Academy of Sciences when a seat for a physicist became vacant. During her life she worked tirelessly for humanitarian causes and the use of X-rays and radioactivity in medical research, refusing to patent any of her ideas. She died of leukaemia caused by prolonged exposure to radioactivity. Her laboratory notebooks are still considered too radioactive to handle and photographic films, when placed between the pages, show the images of Madame Curie's radioactive fingerprints when developed. A year after her death, her daughter Irene won the family's third Nobel Prize.


1899 First patent on Nickel Cadmium rechargeable cells using alkaline chemistry taken out by Waldemar Jungner of Sweden. The first direct competitor to the Lead acid battery.


1899 The world land speed record of 68 mph was set by a Belgian built electric car, the "Jamais Contente", designed and driven by Camille Jénatzy. The first to exceed 100 kph, his cigar shaped car was powered by two 80 cell Fulmen Lead acid batteries supplying two twelve volt, 25 kilowatt motors, integral with the rear axle, driving the rear wheels directly.

Jénatzy, known as the Red Devil because of his red beard, was a famous racing driver at a time when racing was very dangerous. His life ended, however, at his country estate rather than on the race track when, hosting a shooting party, he sneaked into the woods to imitate a roaring bear and was shot by one of his friends.


1899 Young German engineer Ferdinand Porsche, working at the Jacob Lohner Company, built the first Hybrid Electric Vehicle (HEV), a series hybrid, optimised for simplicity and efficiency. It used a petrol engine rotating at optimum, constant speed to drive a dynamo which charged a bank of batteries which in turn provided power to hub mounted electric motors in the front wheels. 300 Lohner Porsches were produced.


1899 Serbian immigrant Mihajlo (Michael) Idvorski Pupin filed for a patent (granted in 1900) for the Pupin inductive loading coils which are used to cancel out distortion due to the distributed capacitance in long transmission lines. The idea, originally proposed but not patented in 1887 by Oliver Heaviside, made Pupin very wealthy and destroyed Heaviside. Far from recognising his debt to Heaviside, Pupin chose instead to belittle his contribution.


Not content with stealing Heaviside's ideas, Pupin played the same trick on Oliver Lodge who patented the tuned circuit for selecting radio waves in 1898. In his autobiography Pupin disingenuously claimed to have invented the tuned circuit in 1892 after being inspired by the way Serbian bagpipers tuned their pipes. Strangely Pupin did not patent the idea at the time but he did receive a patent for "Electrical transmission by resonance circuits" in 1900.


Pupin arrived in the United States as a young penniless immigrant. He studied at Columbia University where he made improvements to X-ray photography and radio wave detection eventually rising to be emeritus professor.


1899 Charles H. Duell Commissioner in the US Office of Patents announced "Everything that can be invented has been invented"


1899 Working at McGill University in Montreal on Becquerel's mysterious rays resulting from the spontaneous disintegration of the Uranium atom, New Zealand physicist Ernest Rutherford, assisted by English chemist Frederick Soddy, investigated Becquerel's beta decay further and discovered two kinds of "rays" (actually particles) emanating from the Uranium. One kind, which he called alpha rays, could be absorbed by a sheet of writing paper. The other, which he called beta rays, was one hundred times more penetrating but could be stopped by a thin sheet of aluminium.

Meanwhile in 1900, French physicist Paul Ulrich Villard found that Radium emitted some far more penetrating radiation, which he named gamma rays. These rays could penetrate several feet of concrete.

Still undetected at that time were the neutral neutrons discovered in 1932 by British physicist Chadwick in experiments with light metals such as Beryllium and the neutron rays from the natural decay of Uranium discovered in 1940 by Russian physicists Flyorov and Petrzhak.

It was still some time before the properties of all these different rays could be determined.


  • By 1900 Becquerel succeeded in deflecting the beta rays with a magnetic field proving that the rays were in fact streams of charged particles. He also measured the e/m ratio of the particles which turned out to be close to that of cathode rays suggesting that the beta rays were in fact streams of electrons.
  • It was not until 1903 that Rutherford was able to deflect the alpha rays and it was 1905 before he could measure the e/m ratio. His results showed that the rays were in fact particles with the opposite charge from an electron. He concluded that if the charge on an alpha particle was the same as that on a Hydrogen ion, the mass of the alpha was approximately twice that of the hydrogen atom. In 1908, he finally established that the alpha particles were Helium atoms with two electrons missing, carrying a positive charge of +2e, and having mass four times that of the Hydrogen atom.
  • Gamma rays were not deflected by a magnetic field which showed them to be rays and not particles. They were found to be similar to X-rays, but with much shorter wavelength. This was not settled until 1914, when Rutherford observed them to be reflected from crystal surfaces.
  • Neutron "rays" were difficult to detect since neutrons carry no charge. The existence of neutrons resulting from the disintegration of Uranium was first noticed by Hahn and Strassmann in 1938 among the products of the induced fission of Uranium atoms when bombarded by other neutrons. Two years later Flyorov and Petrzhak confirmed that neutrons were also produced by spontaneous fission of Uranium.

More 1900 events - continued after "THEME"




THEME: The Development of Quantum Physics


See also the Standard Model of Particle Physics and the Timeline of Theories, Predictions and Discoveries


1900 German physicist Max Planck announced the basis of what is now known as quantum theory, that the energy emitted by a radiating body could only take on discrete values or quanta. Planck's concept of energy quanta conflicted fundamentally with all past classical physics theory and eventually gave birth to the particle theory of light as later explained by Albert Einstein. Although its importance was not recognised at the time, quantum theory created a revolution in physics. Planck was driven to introduce it strictly by the force of his logic; he was, as one historian put it, a reluctant revolutionary.

The energy E of a quantum of light, now called a photon, emitted by a resonator of frequency f is hf

Where h is a universal constant equal to 6.63 × 10⁻³⁴ Joule seconds (Js), now called Planck's constant.

The relationship:   E=hf   is known as Planck's Law.

(Alternatively:   E=hc/λ   where λ is the wavelength of the radiation and c is the speed of light.)
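A quick numerical check of these relationships (an illustrative sketch, not part of the original text):

```python
h = 6.63e-34   # Planck's constant in Joule seconds
c = 299792458  # speed of light in m/s

def photon_energy(wavelength_m):
    # Planck's Law in its alternative form: E = hc/wavelength
    return h * c / wavelength_m

# A photon of green light (assumed wavelength 550 nm) carries ~3.6e-19 Joules,
# which is why the granularity of light went unnoticed by classical physics.
print(photon_energy(550e-9))
```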

Planck was awarded a Nobel prize in 1918 for his work on quantum theory.


See more about Photon Energy


Planck's personal life was a tragic one. His first wife died early leaving Planck with two sons and twin daughters. The elder son was killed in action in 1916 in the First World War. Both of his daughters died in childbirth. World War II brought further tragedy. Planck's house in Berlin containing his technical papers was completely destroyed by bombs in 1944. Far worse, his younger son died while being tortured by the Gestapo after being implicated in the attempt made on Hitler's life in 1944. Planck died in 1947 at the age of 88.


1905 Annus Mirabilis - Einstein's miraculous year. In those twelve months, 26 year old German born Albert Einstein, working as a "technical expert third class" patent clerk at the Swiss Patent Office in Bern, shook the foundations of classical physics with five great papers that established him as the world's leading physicist.


  • Einstein first challenged the wave theory of light, suggesting that light could also be regarded as a collection of particles, now called photons whose energy is proportional to the frequency (colour) of the radiation. A photon of electromagnetic energy is considered to be a discrete particle with zero mass and no electric charge and having an indefinitely long lifetime. This helped Planck's revolutionary quantum theory to gain acceptance.
  • See also Hertz photoelectric effect (1887).


  • The second paper, Einstein's doctoral dissertation, showed how to calculate Avogadro's number and the size of molecules and, surprisingly, is Einstein's most cited work.

  • The third paper concerned the Brownian motion of small particles suspended in a liquid for which Einstein derived an equation for the mean free path of the particles as a function of the time. See also Brownian Motion (1827)

  • In his fourth paper "On the Electrodynamics of Moving Bodies", Einstein introduced for the first time, the concept of Special Relativity. He used it to explain inconsistencies which resulted when Maxwell's equations were used to describe the motions of moving magnets and also to explain the absence, as demonstrated by the Michelson-Morley experiment, of the so called luminiferous aether previously thought essential for the transmission of light waves.
  • It was based on two postulates:

    • The laws of physics are invariant (i.e., identical) in all inertial systems (i.e., frames of reference, such as space and time, moving at a constant speed relative to each other, and not subject to acceleration).
    • The speed of light in a vacuum is the same in all frames of reference, regardless of the motion of the light source or the observers.

    • To this must be added the inference from Galileo's observation that it is not possible to detect whether a ship is moving (at a constant speed) by an experiment from inside the ship. This implies that since there are no fixed reference points in space, it is not possible to identify absolute motion. All motion is relative and it is only possible to detect relative motion between inertial frames of reference.

    These conditions implied that time was variable and that absolute time had to be replaced by a new absolute, the speed of light, introducing a new framework for all of physics.

    This further implied that the time and space experienced by moving bodies become distorted according to the following rules:

    For bodies with an invariant or static length L' and invariant or rest mass m' moving with constant velocity v relative to an observer:

    • The time t' measured by a clock on a moving body (the dilated time), compared with the time t measured by a static observer's clock during the same period, becomes dilated by a factor γ (gamma), known as the Lorentz transformation (but see Note below) equal to 1/√(1 - v²/c²) where c is the velocity of light, so that:
    • t = t'/√(1 - v²/c²)

      Thus a fast-moving clock ticks at a slower rate than a stationary clock. This means that a clock moving with the body measures a shorter time than the static clock during the same period and an observer moving with the body will correspondingly also age more slowly than a static observer.

      (The time t is simply calculated from the extra distance the light appears to travel for the moving observer travelling with velocity v as observed by the fixed observer during the time t' assuming a fixed speed of light c.)

    • At the same time, the length L of the moving body contracts by a factor 1/γ, becoming shorter in the direction of travel, with the contraction given by:
    • L = L'√(1 - v²/c²)

    • Similarly, the so called relativistic mass m of a moving body increases with the velocity and is given by:
    • m = m'/√(1 - v²/c²)

    These adjustments are almost imperceptible in daily life where our movements rarely, if ever, reach 1000 km/h (278 m/s) compared with the speed of light of 299,792,458 m/s. However in particle physics experiments when particles may be accelerated to over 99.9% of the speed of light, these adjustments due to relativity are highly significant. Two examples:

    • A passenger on an airplane travelling at 920 kph (572 mph) for 8 hours as measured by an observer on the ground, will find that his own wristwatch shows about 10 nanoseconds (0.00000001 seconds) less than the scheduled 8 hours, and he will have aged by those 10 nanoseconds less than the static observer.
    • A traveller on a spaceship travelling at 80% of the speed of light on a round trip of 6 years as measured by the spaceship's clock (and his body clock) will find that everybody on Earth will be 10 years older on his return.

    An observer positioned on the moving body, and moving with it, will however have a relative velocity of v=0 with respect to the moving body. In this case γ will be equal to one so that the observer will not witness any time dilations, length contractions or increases in mass and will experience time t', length L' and mass m' just as if the body had been stationary.
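    The two worked examples above can be verified with a few lines of Python (an illustrative sketch using the figures quoted in the text):

```python
import math

c = 299792458  # speed of light in m/s

def gamma(v):
    # Lorentz factor: 1/sqrt(1 - v^2/c^2)
    return 1 / math.sqrt(1 - (v / c) ** 2)

# Airline passenger: 920 kph for 8 hours of ground time.
v = 920 / 3.6              # convert kph to m/s
t = 8 * 3600               # ground observer's time in seconds
print(t - t / gamma(v))    # passenger's "saved" time, ~1e-8 s (10 ns)

# Space traveller: 80% of the speed of light for 6 years of ship time.
print(6 * gamma(0.8 * c))  # elapsed Earth years = 6 / 0.6 = 10.0
```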

    Note

    The prior existence of the Lorentz transformation might imply that Lorentz anticipated or discovered special relativity before Einstein. In fact it was a mathematical expression developed by Lorentz to represent a completely different physical phenomenon which turned out to be an erroneous hypothesis. Nevertheless, the name has endured and it is a coincidence that Einstein's relativistic relationships can be represented by the same mathematical expression.


  • In his last paper in 1905, Einstein asserted the equivalence of mass and energy with the expression E = mc2.
  • He used his special relativity theory to show that the energy of radiation such as light bursts emanating from a body will depend on its frame of reference. Assuming a light burst carries energy E from a body in a stationary frame of reference, the same light burst, from a body in a moving frame of reference with velocity v, will carry energy γE, so that the difference in energy is E(γ-1). Einstein showed that to a very close approximation E(γ-1) ≈ ½E v²/c². But since energy must be conserved, this energy difference must arise from the difference in the kinetic energies of the object in the two frames of reference which is ½mv². Thus:

    E(γ-1) ≈ ½E v²/c² = ½mv²

    Since v does not change, this expression reduces to:

    E/c² = m (the change of mass)    or    E = mc²

    This shows that the energy of the light burst comes from the reduction in the mass of the object.
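    To give a sense of the scale of the equivalence: converting just one gram of matter entirely into energy would release E = 0.001 × (3 × 10⁸)² ≈ 9 × 10¹³ Joules, about 25 million kWh, which hints at the enormous energies later unlocked by nuclear fission and fusion.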


By 1906 Einstein was still working at the Patent Office where, despite his achievements, he was promoted only to "technical expert second class".

It was for the discovery of the law of the photoelectric effect that Einstein eventually received the Nobel prize in 1921, not for the theory of relativity or E = mc2 as is popularly supposed.


In 1915 Einstein published a paper on General Relativity (subsequently modified several times) in which he expanded on his theory of special relativity to include the effects of gravity. He recognised that the local experience of acceleration and of gravity were the same, a phenomenon which he called the equivalence principle. By replacing special relativity's relative frames of reference representing bodies with constant velocity with frames representing constant acceleration, he thus brought the effects of gravity into a more universal theory. This brought "time" into the equation and required the unfamiliar concept of four dimensional mathematics to represent the interrelationships between the three familiar "3D" orthogonal axes of space and the new dimension of time.


Einstein's former teacher at Zurich Polytechnic, German mathematician Hermann Minkowski showed in 1907 that the special theory of relativity, could be understood geometrically as a theory of multi-dimensional "spacetime" where space and time are merged together in a single entity which became known as "Minkowski spacetime". This enabled the properties of space and time to be measured in the same units so that they could be represented in a single spacetime grid diagram. However this formal diagram, while helpful, still did not provide a realistic illustration of the effects of the forces involved.

See an example of a Minkowski Spacetime Diagram


In 1912 Einstein sought the help of Marcel Grossmann, the head of the mathematics department at the Zurich Polytechnic, to find a better visual analogy which fully described the physics of general relativity. Grossmann pointed out that the problem was due to the limitations of the familiar Euclidean geometry of flat space to illustrate curved surfaces and their effects and that this problem had already been solved in 1854 by German mathematician Bernhard Riemann.

Riemann had devised a new geometry based on the extended use of vectors for calculating distances on curved surfaces. Euclidean vectors normally describe a quantity acting in one dimension with two components, a magnitude and a direction. Riemann extended this concept to quantities acting in multiple dimensions, so that the quantity describing distances on a curved three dimensional space has six independent components, while in four dimensions it has 16 components, ten of them independent. This was called a metric tensor. (A tensor can be considered to be a multi-dimensional matrix)

Thus he showed that the properties of four dimensional space can easily be represented in an abstract mathematical form by tensors (easily understood at least by expert mathematicians and physicists). This enabled Einstein to develop a visual analogy known as the spacetime grid or metric which better illustrates the physics.

See an example of the Spacetime Grid showing the warping of space due to the influence of the massive stars and planets in the Universe.


Einstein also predicted that a consequence of the distortion of spacetime is the gravitational deflection of light as it passes a massive celestial object such as a star. This effect is called gravitational lensing. If this could be demonstrated, it would verify his general relativity theory.

In 1917 Einstein's challenge was taken up by Frank Watson Dyson the British Astronomer Royal, and astrophysicist Arthur Stanley Eddington who together devised an experiment to investigate the issue. Development of the plan was not easy since it involved cooperation between scientists from Britain and Germany, two nations which were then at war with each other.

The method was to observe the light from the stars in the part of the sky surrounding the Sun. But because the intense light from the Sun washes out the light from the stars, they would have to wait until the occurrence of a total solar eclipse when the Sun is masked by the Moon so that the light from the surrounding stars would be visible in the darkness.

The first opportunity was the predicted solar eclipse of 29th May 1919 which could be observed in Africa and Brazil. Eddington took a team to the island of Principe, off the west coast of Africa, and Charles Rundle Davidson from the Greenwich observatory took another team to the city of Sobral in Brazil to maximise their chances of success in case of adverse weather conditions.

The measurements were indeed hampered by the weather as had been feared but they went ahead anyway.

It rained on Principe on the morning of the eclipse and Eddington was only able to capture images through fleeting clouds. He managed to take 16 photographic plates but later discovered that only two of these contained enough stars to tell whether their light might have been bent or not and unfortunately they were blurred. Nevertheless he was convinced that they confirmed the expected displacement of their images by the Sun.

At the same time, although it was unusually cloudy in Sobral, Davidson was slightly luckier when the sky cleared one minute before the totality of the eclipse. Unfortunately 19 images from his main telescope were also blurred and out of focus because the heat of the Sun had distorted the telescope mirror, but happily he obtained several relevant star images from a back-up telescope he had taken with him.

Despite the imperfections in the images the scientific community considered that they had confirmed Einstein's predictions.

Eddington announced his findings at the Royal Society on November 6, 1919 and the spectacular news made the front page of most major newspapers around the world, making Einstein and his theory of general relativity world-famous.


Similar experiments in subsequent years have confirmed Einstein's theory. In 2017 the first observation of the apparent displacement of a star due to bending of its light by another celestial body other than our Sun was made by the Hubble telescope. It measured the displacement of light caused by the chance alignment of two stars from outside our solar system with the line of sight of the telescope.

See a diagram of The Gravitational Deflection of Light as seen by the Hubble telescope


In 1924 Indian physicist Satyendra N. Bose working at the University of Dhaka wrote a paper entitled "Planck's Law and the Hypothesis of Light Quanta" outlining an alternative derivation of Planck's Radiation Law which he sent to Einstein asking for his help in publishing it. Einstein was intrigued, translated Bose's paper into German, and had it published in Zeitschrift für Physik under Bose's name. Over the next few months Einstein clarified and expanded Bose's work and used his theories to investigate and develop what became known as the Bose-Einstein statistics which defined the possible quantum energy states of photon clouds. He compared the possible energy states of a cloud of identical matter particles such as electrons and a similar, though theoretical, cloud of identical light particles (photons) and concluded that the two particle types would behave differently. This was one year before, but consistent with, Pauli's notion that matter particles obeyed the exclusion principle and consequently had fewer degrees of freedom than the photons, which did not suffer from the same constraint and would therefore have different properties.

In subsequent years, the special case of the photons which had radiation-like properties and were characterised by having integer spin was extended to a general class of all particles with zero or integer spin which possess similar properties. Paul Dirac named this class of particles "bosons" in honour of Satyendra Bose.

See more about bosons.


See also Fermi-Dirac statistics.


See also Einstein's Refrigerator.


Einstein once said "The hardest thing in the world to understand is the income tax".


1909 Cambridge undergraduate student Geoffrey I. Taylor, investigating the corpuscular nature of light, repeated Young's double-slit experiment using photons as the light source. A very low intensity light source, further attenuated by darkened glass plates, was used to produce a beam of individual photons to illuminate the target. It took three months to transmit sufficient photons to produce a photographic image of the light fringes on the screen. While the photon source was not perfect, the interference fringes produced were similar to those produced by Young, demonstrating the interference between quantum particles, confirming the corpuscular theory of light and the principle of wave-particle duality and suggesting the possible wave nature of matter. The oddest result however was that the interference occurs even if only one particle is fired at a time. A single particle seems somehow to pass through both slits at the same time, interfering with itself. This behaviour is now known as a superposition of states. His results were later confirmed with more precise equipment by others. See diagram of Young's and Taylor's double-slit experiments.


One important conclusion was that a particle seems to behave like a particle when it is created (emitted) or annihilated (absorbed) but as a wave while in transit.


In 1961 Taylor's double-slit experiment was repeated using electron beams as the source by German physicist Claus Jönsson of the University of Tübingen, confirming the wave nature of matter suggested by Taylor's results and predicted by de Broglie.


1913 Niels Bohr a Danish physicist working under Rutherford at Manchester University applied quantum theory to atomic structure, proposing a more detailed model of the atom with electrons existing in distinct orbits or shells that had discrete quantised energies, or specific energy levels. Known as the Bohr Model it was later modified to take into account Heisenberg's uncertainty principle which indicated that the electrons occupied distinct shells confusingly called "orbitals" but their position within the orbital was random and could not be known.

Later, he also used Gamow's liquid drop model of the atom to explain nuclear fission.

Bohr proposed that the chemical properties of the element are largely determined by the number of electrons in the outer orbits and introduced the idea that an electron could drop from a higher-energy orbit to a lower one, emitting a photon (light quantum) of discrete energy. This became the basis for quantum mechanics for which he was awarded a Nobel prize in 1922.


Bohr's model of the atom introduced the idea of energy states and quantum numbers and provided the basis for the Pauli Exclusion Principle. It also provided an explanation of the theory behind the Emission and Absorption Spectra of the hydrogen atom as well as the logic behind the groupings in the Periodic Table of the Elements.
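The arithmetic behind the hydrogen emission spectrum can be sketched in a few lines of Python (illustrative only, using the standard Bohr energy formula En = -13.6 eV/n² rather than anything from the original text):

```python
h = 6.63e-34    # Planck's constant in Joule seconds
c = 299792458   # speed of light in m/s
eV = 1.602e-19  # one electron volt in Joules

def level(n):
    # Bohr energy of the nth orbit of the hydrogen atom
    return -13.6 * eV / n ** 2

def emission_wavelength(n_high, n_low):
    # The emitted photon carries the energy difference: E = hf = hc/wavelength
    return h * c / (level(n_high) - level(n_low))

# The drop from the third to the second orbit gives the red
# hydrogen-alpha line of the emission spectrum, at about 656 nm:
print(emission_wavelength(3, 2))
```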


See how this phenomenon is also used in Atomic Clocks, another practical application.

In contrast to his mentor Rutherford's predictions, Bohr is quoted as saying "Prediction is very difficult, especially about the future"


1916 Using the photoelectric effect Millikan determined Planck's constant directly - verifying the 1905 Einstein theory of the photoelectric effect and the quantum nature of light. (After ten years spent trying to prove that Einstein's photon or particle theory of light was wrong, he eventually succeeded in proving it was right.) He was awarded the Nobel prize for this work in 1923. See a diagram and explanation of Millikan's Determination of Planck's Constant.


1916 German physicist Arnold Johannes Wilhelm Sommerfeld enhanced the Bohr theory of the atomic structure by introducing non-circular orbits, by allowing quantised orientations of the orbits in space, and by taking into account the relativistic variation in the mass of the electron as it orbited the nucleus at high speed. These properties or quantum states were characterised by three quantum numbers in what is now called the Bohr-Sommerfeld model of the atom.


See also Fukui's theory of molecular orbitals


1922 German physicists Otto Stern and Walther Gerlach demonstrated that atomic scale systems have intrinsically quantum properties. They devised an experiment which showed that atoms have spin attributes and that the spin is quantised. See a diagram and explanation of the Stern-Gerlach Experiment.


When certain elementary particles move through a magnetic field, they are deflected in a manner that suggests they have the properties of little magnets. This behaviour is very similar to that of a familiar classical spinning charged object in a magnetic field and hence the elementary particle was deemed to have characteristics of spin.

Unfortunately this is a misleading analogy since elementary particles are not solid but point-like and the notion of spin is questionable, as is also the notion of spinning large atomic particles. The spin of composite particles is calculated by summing the spins of their constituent elementary particles. The mechanics of such arrangements are difficult to envisage. In the case of the triplets of quarks which make up protons and neutrons, it is not possible to know the orientation of the individual quarks and thus impossible to predict the net spin of the group. Quite possibly the so called "spin" is another name for a little understood phenomenon which produces similar results.

Nevertheless the notion of spin has been useful to describe particle deflections as being related to "intrinsic" spin, even if this is not strictly true, since it allows other particle behaviours to be predicted and has become an essential tool in understanding all interactions involving subatomic particles.

Subsequent theories and experiments by Pauli, Uhlenbeck and Goudsmit and others have confirmed that spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei and that a particle's spin never changes and has only two possible orientations, spin-up and spin-down. The Higgs boson is an exception having no spin.


1923 American physicist Arthur Holly Compton provided the first widely accepted experimental evidence that electromagnetic radiation can exhibit both particle and wave behaviour. He observed that the wavelength of X-rays increased when they lost energy in collisions with, or were scattered by, the electrons in low atomic weight elements. Since X-rays were considered to be high energy photons, this observation was consistent with Einstein's quantum theory and Planck's Law which states that a wave's energy is inversely proportional to its wavelength.
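The size of the effect is given by the Compton scattering formula, Δλ = (h/mec)(1 - cos θ), where θ is the scattering angle; a short illustrative sketch (not from the original text):

```python
import math

h = 6.63e-34    # Planck's constant in Joule seconds
m_e = 9.11e-31  # electron rest mass in kg
c = 299792458   # speed of light in m/s

def compton_shift(theta_degrees):
    # Increase in X-ray wavelength after scattering through the given angle
    theta = math.radians(theta_degrees)
    return (h / (m_e * c)) * (1 - math.cos(theta))

# At 90 degrees the shift equals the Compton wavelength, about 2.4e-12 m,
# a measurable fraction of typical X-ray wavelengths of 1e-11 to 1e-10 m.
print(compton_shift(90))
```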

Compton earned the 1927 Nobel Prize in Physics for this discovery.


1924 French aristocrat Prince Louis-Victor Pierre Raymond, duc de Broglie, who came to physics late after studying humanities and receiving a degree in history, speculated that nature did not single out light as being the only matter which exhibits a wave-particle duality. He proposed that since light waves could be considered as particles, the converse should be true and ordinary "particles" such as electrons, protons, or bowling balls could also exhibit the characteristics of waves. The relationship was neatly summarised by the following equation known as de Broglie's Law:

λ = h/p

Where λ is the wavelength - the property of a wave, p is the momentum - the property of a particle and the constant of proportionality is h - Planck's constant.
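A numerical illustration (a sketch with assumed masses and speeds) shows why wave behaviour is observable for electrons but not for bowling balls:

```python
h = 6.63e-34  # Planck's constant in Joule seconds

def de_broglie_wavelength(mass_kg, velocity_m_s):
    # de Broglie's Law: wavelength = h/p, with momentum p = m.v
    return h / (mass_kg * velocity_m_s)

# An electron at 1% of the speed of light: ~2.4e-10 m, comparable with
# the spacing of atoms in a crystal, so diffraction can be observed.
print(de_broglie_wavelength(9.11e-31, 3e6))

# A 7 kg bowling ball at 5 m/s: ~1.9e-35 m, far too small
# for any conceivable wave effects to be detected.
print(de_broglie_wavelength(7, 5))
```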

His theory was confirmed in 1927 by G.P. Thomson and by Davisson and Germer who demonstrated wave-like properties of the electron (see 1927 below).

de Broglie was awarded the Nobel Prize in 1929 for his work on subatomic particles.


See more about Wave - Particle Duality


1925 Swiss theoretician Wolfgang Pauli explained why electrons orbiting an atomic nucleus do not all fall into their lowest energy state due to attraction from the positive protons in the nucleus. The previous year, in order to correct inconsistencies in the three quantum state model of the atom proposed by Niels Bohr and Arnold Sommerfeld, as well as problems in the developing theory of quantum mechanics, Pauli had proposed that atomic particles must have a new quantum "degree of freedom" (or quantum number) with two possible values.


He was of course aware of the 1922 Stern-Gerlach Experiment indicating quantum spin properties of atoms but further support was provided by two Dutch physicists, George Uhlenbeck and Samuel Goudsmit working at Leiden University. In 1925 they published a paper suggesting that besides angular momentum of the electron due to its orbital motion around the nucleus, it also had intrinsic axial "spin" angular momentum like the rotation of the earth as it goes around the Sun.


After initial scepticism about the possibility of rotating electrons, Pauli proposed that besides orbiting the atomic nucleus, the electrons must also have spin properties. Thus the electron can have four quantum states characterised by four quantum numbers which define:

  • the distance of the electron from the nucleus,
  • its kinetic energy (based on its angular momentum),
  • its magnetic moment (based on the azimuth angle of the plane of the orbit)
  • plus the intrinsic magnetic moment of the electron itself due to its spin.

In 1928 Paul Dirac provided the theoretical justification for Pauli's proposition.


Pauli is remembered more for the principle proposed in 1925 that no two electrons in an atom can occupy the same quantum state (the same wave). If they did they would cancel each other out. This is now known as the Pauli Exclusion Principle. This principle also provided the theoretical basis for Mendeléev's Periodic Table of the Elements. It also explains why, despite the empty space between an atom's electron cloud and its nucleus, atoms behave as solids and cannot be compressed into a smaller space.


In 1940 Pauli also formulated the spin-statistics theorem, which states that matter particles, which have half integer spin (fermions), obey the exclusion principle while force particles (bosons), which have integer spin, do not. This implies that only one fermion can occupy a given quantum state at any time, while the number of bosons that can occupy a quantum state is not restricted.


He was awarded the Nobel Prize in 1945 for his "discovery of a new Law of Nature". One of the giants of twentieth century theoretical physics, he was notorious for his rudeness. He was also known for the "Pauli Effect" which predicted disaster for any piece of apparatus with which he was involved.


1925 German physicist Werner Heisenberg, working at Göttingen, proposed a new model for the properties of the atom showing that it had different quantised energy states represented by frequencies and intensities. At the time, current methods of describing the atom with physical analogues of orbiting electrons could not account completely for its behaviour and there was incomplete understanding by physicists about the precise nature of particle events and interactions. (They're still not completely understood). Heisenberg didn't like the notion of Bohr's imaginary electron orbits and instead based his model on observable qualities which could actually be measured before and after an event. Quantum jumps replaced the Bohr Model's electron orbits. At the suggestion of, and with assistance from, his colleague Max Born and Born's assistant Pascual Jordan, a talented theoretical physicist, these quantum states were incorporated into matrices. Known as Matrix Mechanics theory, it was a mathematical abstraction, but it worked.

The following year, Pauli used the new matrix mechanics to derive the observed spectrum lines of the hydrogen atom securing credibility for Heisenberg's theory.

See examples of Heisenberg's Matrix Mechanics


In 1928 Heisenberg, Born and Jordan were nominated by Einstein for the Nobel Prize in Physics for their creation of quantum mechanics but in 1933, when the delayed 1932 award was eventually announced, Heisenberg alone received the honour.

Jordan's contribution was not recognised, some say because of his involvement in the Nazi party which he joined in 1933 becoming one of Hitler's Storm Troopers.

Born who was Jewish was similarly overlooked, it is claimed because of speculation about his association with Jordan. He was suspended from his post at Göttingen in 1933, together with five fellow Jewish professors, when the Nazi party came to power and emigrated to England where he was offered a post at Cambridge. He was eventually awarded a Nobel Prize in 1954, albeit for his work in 1926 on statistical mechanics.


Alternative models were also developed around the same time by Schrödinger (1926) and Dirac (1928).

See also Heisenberg's Uncertainty Principle.


Heisenberg was appointed head of Germany's atomic weapons programme during World War II. Although, through the pioneering work of Szilard, and of Hahn and Strassmann on nuclear fission, Germany was ahead of the Allies before the war, by 1945 they were still a long way from being able to produce an atomic bomb and had never even achieved a chain reaction.


1926 Building on de Broglie's wave-particle duality hypothesis, Austrian physicist Erwin Schrödinger formulated a theory for the behaviour of atomic particles which has the same central importance to Quantum Mechanics as Newton's laws of motion have for the large-scale phenomena of classical mechanics. In contrast to Heisenberg he focussed on the qualities which were directly connected to the progress of physical reactions involved in the performance of experiments rather than just their outcomes and used familiar concepts of standing waves and harmonics to describe particle energies.

Schrödinger's Wave Equation takes into account the changes over time of a physical system resulting from quantum effects, such as wave-particle duality, even though the process of these transitions could not be measured, and was proposed as an alternative to Heisenberg's Matrix Theory (above). It describes the atom in the form of the probability waves (quantum fields or wave functions) that govern the motion of small particles, and it specifies how these waves are altered by external influences. He realised that the possible orbits of an electron would be limited to those accommodating standing waves, that is, with an exact number of wavelengths. This permits only a limited number of possible orbits and no possible orbits between them. Thus, when an electron "jumps" from a higher energy orbit (quantum state or eigenstate) to a lower energy orbit, a photon is released which has energy equal to the difference between the two energy states. This explains why photons are only ever released with certain discrete quanta of energy.

Schrödinger's theory of Wave Mechanics explained some of the hitherto inexplicable behaviour of atomic particles by considering them as waves not particles and the wave equation predictions were borne out by experimental results.

Schrödinger's equation, shown below, is a linear, time dependent, differential equation which describes the change in the system wave function Ψ, its quantum state, resulting from a change in the total system energy H (known as the Hamiltonian). It indicates the probability of a single quantum object, such as an electron, with mass m in a field with potential V moving in a single spatial dimension, being at some location x at a given time t while simultaneously behaving as a wave.

iħ ∂Ψ/∂t = ĤΨ = -(ħ²/2m) ∂²Ψ/∂x² + VΨ

The Hamiltonian (energy) Operator Ĥ is a linear mathematical function representing the time evolution of a quantum state which operates on the wave function (Ψ) to determine the change in total system energy.

ħ ("h bar") is a constant equal to Planck's constant h divided by 2π. Known as Dirac's Constant, it is the quantisation (unit) of angular momentum and is needed to convert the right-hand side of the equation into a value with units of energy to match the left-hand side.

∂²/∂x² is the second order differential operator with respect to the position x.

The function Ψ may also include parameters representing other properties of the particle and the equation can also be extended to include motion in three orthogonal dimensions x, y and z.
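As a minimal numerical sketch (an illustrative addition assuming the textbook "particle in a box" case, with V = 0 between hard walls), the equation can be solved by representing the second derivative as a finite-difference matrix and finding the eigenvalues of the Hamiltonian, which are the allowed energies:

```python
import numpy as np

hbar = 1.055e-34  # Dirac's constant in Joule seconds
m = 9.11e-31      # electron mass in kg
L = 1e-9          # width of the box in metres (a few atomic diameters)
N = 1000          # number of grid points
dx = L / (N + 1)

# Second derivative approximated as a tridiagonal matrix.
d2 = (np.diag(np.full(N, -2.0)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1)) / dx ** 2

H = -(hbar ** 2 / (2 * m)) * d2   # Hamiltonian with V = 0 inside the box
energies = np.linalg.eigvalsh(H)  # the quantised energy levels

eV = 1.602e-19
print(energies[:3] / eV)  # ~0.38, 1.50, 3.38 eV: discrete levels, not a continuum
```

The discrete eigenvalues correspond to the standing wave states described above: only whole numbers of half wavelengths fit between the walls, so only certain energies are allowed.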

Schrödinger considered his model closer to classical physical theory and less of an abstraction than Heisenberg's model. Paul Dirac later proved that these two models were equivalent.

Heisenberg's and Schrödinger's theories represented another break from classical physics, in which the locations of observable objects can be known with certainty, to the physics of fundamental particles, in which only the probability of finding a particle at a given location can be known.


Schrödinger contributed to many branches of physics including quantum theory, optics, kinetic theory of solids, radioactivity, crystallography, atomic structure, relativity and electromagnetic theory. In 1935 he published the famous Schrödinger's cat paradox which was designed to illustrate the absurdity of the probabilistic notion of quantum states. This was a thought experiment where a cat in a closed box either lived or died according to whether a quantum event occurred. The paradox was that both universes, one with a dead cat and one with a live one, seemed to exist simultaneously until an observer opened the box. In his later years he applied quantum theory to genetics. He coined the term "Genetic Code" and published an influential book "What is Life?" which inspired Watson and Crick in their search for the structure of DNA.

He also studied Greek science and philosophy and published his thoughts in his book "Nature and the Greeks".

He was awarded the Nobel Prize for physics in 1933.


Schrödinger's wave mechanics provided the foundation, built on by Heisenberg, Dirac and others, for explaining the behaviour of electrons, nuclei, atoms, molecules and chemical bonding, fundamental building blocks or processes used in galvanic cells, as well as nanotechnology and the phenomena of nuclear fusion and superconductivity, processes used in the generation and distribution of electric power. Quantum mechanics also represents the behaviour of electrons and "holes" (the absence of electrons) in semiconductors and the process of electron tunneling used in Scanning Tunneling Microscopes and other electronic devices. For the future, research into the possibilities of quantum computers, whose bits can be both 0 and 1 at the same time depending on the electron spin, performing calculations at unprecedented speed, is also founded on the quantum theories of Schrödinger, Heisenberg, Dirac and their successors.

It was almost forty years before the principles demonstrated by Volta in his voltaic pile were successfully put to use by the telegraph pioneers in commercial products. In the case of Faraday's motor, it was almost sixty years before a market was created. Watch out for Schrödinger's kittens.


A man of many accomplishments, Schrödinger's life was both colourful and complicated. He had an informal manner and throughout his life he travelled with walking boots and rucksack which raised a few eyebrows at the many conferences he attended. He served in Italy and Hungary during the First World War. Later he was an opponent of Nazi rule in Germany which brought him several brushes with authority. As an eminent physicist he also received many offers of positions in the world's best universities and at various times he held posts at Graz, Berlin, Breslau, Zurich, Oxford, Princeton, Edinburgh, Rome, Dublin, Gent and Vienna. His relationships with women were however even more wide-ranging. He had numerous lovers with his wife's knowledge (even more Schrödinger's kittens) and she in turn was the lover of one of Schrödinger's friends. While at Oxford he brought his colleague Arthur March from Germany to be his assistant since he was in love with March's wife who was pregnant with his child and he lived openly with his new daughter and two wives, one of whom was still married to another man. During his time in Dublin he fathered two more daughters with two different Irish women.

And in between he also found time to do a little physics....


1926 German physicist Max Born, working at Göttingen, fortuitously discovered that the probability of finding an undisturbed particle or quantum entity at a given point is proportional to the square of the amplitude of the particle's wave function, |Ψ|2, at that point. This means that at any given time there is a distinct probability associated with finding an individual particle at various different locations but since there is only one particle, the total of all probabilities must sum to 1.0. More generally, the probability of the existence of a quantum state is given by the square of its individual wave function. This is known as Born's Law. Thus he was able to reconcile particles with waves by treating Schrödinger's wave as the probability that an electron will be in a particular position.


The probability of finding a particle in different positions (or states) is often incorrectly interpreted as implying that a particle can exist at two or more places at the same time. The reality is that if a measurement is taken, the system will be disturbed and the particle will be found in a unique position with probability 1 and all other probabilities will fall to zero. This is known as the collapse of the wave function and is an inevitable consequence of measurement affecting the system being observed.


Quantum physics was no longer exact and deterministic but probabilistic.


In other words a quantum particle doesn't exist in one state or another, but may exist in a weighted mixture of the probabilities of all of its possible states at the same time known as the superposition of states. This is a property of wave mechanics, similar to waves in classical physics, that any two (or more) quantum states can be added together and the result will be another valid quantum state even if the states are incompatible with each other. Conversely, every quantum state can be represented as the sum of two or more other distinct states. It includes all the possible probabilities of the potential wave functions of measurable quantities such as momentum, position, spin up and spin down, energy and duration, and polarisation angle which may be intermingled until a measurement is made. A wave can thus be thought of as a "wave of probabilities".

An example of a mixed state is a particle with a 50% probability of spin up and a 50% probability of spin down or some other mixture of the two states. It is only when an observation or measurement is made that its actual state is revealed and it settles into a single definite state, either spin up or spin down, with a probability of 1. The measurement does not indicate the average of the two (or more) states, nor does the particle take up such an "average" state. Once the measurement has been made, the particle's properties begin to dissolve again into a new superposition of states.
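A small Python sketch (illustrative only) of how Born's Law turns the amplitudes of the spin example above into measurement probabilities:

```python
import numpy as np

# A superposed spin state a|up> + b|down> with equal amplitudes.
state = np.array([1, 1j]) / np.sqrt(2)

# Born's Law: the probability of each outcome is |amplitude|^2,
# and the probabilities of all outcomes must sum to 1.
probs = np.abs(state) ** 2
print(probs, probs.sum())  # [0.5 0.5] 1.0

# A measurement picks one outcome at random with these probabilities;
# the state then "collapses" to that single definite outcome.
print(np.random.choice(["spin up", "spin down"], p=probs))
```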

This implies that, though a particle or quantum entity can be in multiple possible states at the same time, it can not however be observed in this indeterminate state. Such indeterminate states can only exist when they are not being observed.


The property of quantum superposition is consistent with Schrödinger's wave equation which is linear and any linear combination of solutions will also be a solution. Note however that this is purely a mathematical model which represents the state of the particles at key times. It does not describe or explain their behaviour.


Born was awarded the Nobel Prize in 1954 for his work on the statistical interpretation of the wave function.

The singer Olivia Newton-John is a grand-daughter of Born.


1927 Heisenberg formulated the Uncertainty Principle which is of fundamental importance in particle physics. This is another manifestation of measurement affecting the system being observed. It is also a consequence of the wave - particle duality in the behaviour of quantum objects, recently predicted by de Broglie.

It was known by experiment that some pairs of quantum properties were related to eachother by complementarity and uncertainty. Known as conjugate or complementary pairs, decreasing the uncertainty of one variable of the pair increases the uncertainty of the other. Examples of such pairs of variables are "position with momentum" and "time (or duration) with energy", which both suffer similar uncertainties. Note that one of the variables in each conjugate pair is associated with wave aspects of the particle (this includes position and time), while the other variable in the pair, (such as energy or momentum), is associated with its particle aspects.
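In its modern quantitative form (added here for reference) the principle puts a fixed lower bound on the product of the uncertainties of a conjugate pair:

Δx.Δp ≥ ħ/2    and    ΔE.Δt ≥ ħ/2

Where Δx, Δp, ΔE and Δt are the uncertainties in position, momentum, energy and time respectively, and ħ is Dirac's constant (Planck's constant divided by 2π).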

The notion of complementarity, conceived by Bohr, recognises the different characteristics of waves and particles so that objects have certain pairs of complementary properties which cannot all be observed or measured simultaneously.

The wave represents possibility or potentiality. The particle represents reality.

Heisenberg's uncertainty principle is an example of complementarity. It means that it is impossible to determine simultaneously the magnitude of both elements of the pair, such as a particle's position and its momentum. The more accurately one is measured, the less accurately the other can be known. The result achieved depends on the nature of the measurement, whether it is looking for a particle or a wave.

An example of such uncertainty is that in order to determine the position of a particle such as an electron it must be illuminated by a light wave (photons), or for more accuracy, by shorter wavelength gamma (γ) rays which are more energetic. But the interaction of the photon or gamma ray with the target electron is in effect a collision which causes the electron to recoil, disturbing its momentum, the more energetic gamma ray causing a greater change in momentum. Though the electron's position may be determined, its corresponding momentum cannot be accurately known.


Note also that apart from the uncertainty, taking a measurement also causes the collapse of the wave function, a much more dramatic effect. Born's Law explains why.


1927 At a September conference at Como in Italy, Niels Bohr, aided by Heisenberg, proposed a unified and consistent view of the atom and its behaviour. It took into account the quantum nature of his original model (though not its notion of orbits), complementarity, and the recent developments by Heisenberg of matrix mechanics, including the uncertainty principle and the disturbance to the system created by taking measurements, as well as Born's view that only the probability of a quantum state can be known, not its certainty, unless it is measured directly, such measurements causing the collapse of the wave function. He did not attempt to provide a theory describing the transitions between energy states, though the relevance of Schrödinger's wave theories to the superposition of states was recognised. Bohr's view was that the proposed matrix mechanics theories worked and detailed models of the interactions and transitions were neither accurate nor necessary.

This collection of ideas became known, though only later, as the Copenhagen Interpretation after Bohr's institution in Copenhagen. To those still seeking a more in depth theory of particle behaviour, it has however since been called "The shut up and calculate interpretation", a name coined by Cornell physicist N. David Mermin but often attributed to Richard Feynman.


Einstein, who was not present at Como, was very unhappy about this apparent randomness in nature described by the Copenhagen Interpretation. The following month at the Solvay Conference in Brussels he called for a deterministic theory, the precise reality of the events and quantum states, not the probabilities of their occurrence. His views were summed up in his famous phrase, "God does not play dice". Schrödinger was similarly unconvinced. They considered the theories based on events while ignoring the processes in between to be incomplete. To the end of their days, Bohr and Einstein challenged each other's models of quantum physics with Einstein emphasising wave-particle duality, causality and determinism while Bohr championed the discrete energy states, quantum jumps, indeterminism and probability.


1927 British physicist George Paget Thomson, son of J.J. Thomson, discoverer of the electron, working with Alexander Reid at Aberdeen University and simultaneously and independently, Americans Clinton Joseph Davisson working with Lester Halbert Germer at Western Electric Labs, confirmed de Broglie's hypothesis of the wave particle duality of the electron. Thomson created transmission interference patterns by passing an electron beam through a thin metal foil and Davisson created diffraction patterns of electron beams reflected from metallic crystals, both confirming the wave nature of the electron.

Thomson and Davisson were awarded the Nobel Prize for physics in 1937.


1928 British physicist Paul Adrien Maurice Dirac, working on quantum field theory at the Cavendish Laboratory in Cambridge, combined the quantum mechanics of Bohr and Pauli with Maxwell's electromagnetic field theory to model the properties of the electron. He introduced the concepts of special relativity and electron spin, which gives the electron its internal magnetic properties, into Schrödinger's wave equation (properties which Schrödinger had not been aware of) to develop the Dirac equation, which was consistent with both Heisenberg's matrix mechanics and Schrödinger's wave mechanics. Dirac used matrix algebra to incorporate more parameters into Schrödinger's wave function Ψ while at the same time enabling it to be simplified and expressed as a first order (linear) differential equation.

Dirac's equation, shown as follows in summary form, introduced gamma (γ) matrices and the imaginary unit (i=√-1) to take account of the electron spin and Einstein's special relativity.

i γ·∂Ψ = mΨ     (in natural units, where ħ = c = 1)

Dirac's model could treat the electron as either a wave or a particle and still get the right answers. It was the first expression of relativistic quantum field theory and marked the beginning of a new branch of physics - Quantum Electrodynamics - QED.

Originally developed to describe the behaviour of an electron, Dirac's elegant and simple equation, with such wide and deep applications, was soon accepted as representing the behaviour of all particles with half integer spin, that is fermions, a family which includes every fundamental matter particle in the universe.


It is a measure of Dirac's greatness that his equation was not conceived as a theory to explain observed physical phenomena like most scientific theories in the past. It was derived from pure mathematical reasoning to illustrate or predict the possibilities of physical events or transformations which had neither been observed nor even imagined.


In 1931, Dirac used his equation to predict the existence of a particle with the same mass as the electron but with positive rather than negative charge. This "anti-particle", now called a positron, was detected by American physicist Carl Anderson in 1932. We now know that all matter particles have corresponding anti-particles with properties as permitted or described by Dirac's equation.


See also Fermi-Dirac statistics.


Dirac shared the Nobel Prize for Physics with Schrödinger in 1933.

Unlike Schrödinger, Dirac was legendarily shy. When informed that he had won the Nobel Prize he told Rutherford that he did not want to accept it because he disliked publicity. Rutherford told him that refusing the prize would bring even more publicity!


Dirac had a traumatic childhood, raised in a cold, unconventional family. His father, a Swiss immigrant to the UK, was a strict and authoritarian tyrant who bullied his wife and insisted that his three children spoke to him only in French. Mealtimes were distressing as Dirac ate in the dining room with his father, speaking only in French, while his mother ate in the kitchen with his siblings, speaking English. When Dirac found it difficult to express his thoughts in the perfect French his father demanded, he chose to remain silent. When he was 22 his brother Felix committed suicide at the age of 25, leaving him even more demoralised.

The results of this background had a profound and lasting effect on Dirac. He was withdrawn and lacking in emotion and found it difficult to communicate and socialise and so had few friends. He was famous for his long silences and was precise and taciturn in his speech, saying exactly what he meant and no more. He hated his father to his dying day.


Dirac's equation appears on his memorial stone in Westminster Abbey, close by the graves of Newton, Darwin and Rutherford and the recent addition of Stephen Hawking's ashes.


In 1933 Einstein was visiting the United States when Hitler's Nazi party came to power and passed laws barring Jews from holding any official positions, including teaching at universities. Because of his Jewish background, Einstein did not return to Germany and instead took up an offer from Princeton to be a resident scholar.

He had not given up his challenges to Bohr's Copenhagen Interpretation and, working with younger colleagues, Russian-born physicist Boris Podolsky and American-Israeli physicist Nathan Rosen, he looked for flaws in Bohr's theory by investigating the conclusions which could be drawn from it when applied to pairs or groups of particles forming a self contained system with an observer. Named later by Schrödinger as quantum entanglement, such systems involve particles which may be generated from a single source, resulting in strongly correlated properties between them and the apparent possibility that quantum information can be exchanged between the two particles even after they are separated from each other by very large distances, as far as across the Universe. It is as if the daughter particles remained in contact with each other.


As an example of entanglement, consider an unstable spin 0 particle which decays into two different particles, particle A and particle B, heading in opposite directions. Because of conservation laws, since the initial particle had spin 0, the sum of the two new particle spins must also be equal to zero. If particle A is found to have spin up, then particle B, wherever it is, must have spin down (and vice versa). However, according to the Copenhagen Interpretation, until a measurement is made neither particle has a definite state. They are both in a superposition of possible states, which means that the particles are both spin up and spin down at the same time, in this case with an equal probability (50%) of having spin up or spin down. No matter how far the two particles become separated, a precise measurement of the spin of particle A would determine its spin with a probability of 1 and would result immediately in equally precise knowledge of the spin of particle B without needing to disturb it in any way. It does not require any communication between the two particles. This is because the act of measurement causes the collapse of the wave function which affects both particles simultaneously, causing them to assume definite quantum states.

Notes: It is a necessary condition for entanglement that each particle is in a superposition of states. Though the changes in the particle states are correlated, it does not imply causation. Similar behaviour applies to other quantum properties such as position, momentum, polarisation etc. of entangled particles.
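The perfect anti-correlation described above can be illustrated with a minimal simulation sketch, assuming an idealised spin-0 decay with noiseless detectors; it simply encodes the bookkeeping of the measurement outcomes, not any underlying mechanism:

import random

# Idealised entangled pair from a spin-0 decay: each measurement outcome is
# random (50/50), but conservation of spin forces the partner particle to
# show the opposite result every time.
def measure_entangled_pair():
    spin_a = random.choice(["up", "down"])         # "collapse" on measurement
    spin_b = "down" if spin_a == "up" else "up"    # total spin must remain zero
    return spin_a, spin_b

pairs = [measure_entangled_pair() for _ in range(10_000)]
assert all(a != b for a, b in pairs)               # always anti-correlated
print(sum(a == "up" for a, _ in pairs) / len(pairs))    # ~0.5, each outcome random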


In 1935, as a result of their studies, Einstein together with Podolsky and Rosen published a paper describing a thought experiment, now called the EPR paradox (after the authors' initials), in which they used the above considerations to refute Bohr's explanations of entanglement between remote particles. Those explanations required coordinated interactions between the entangled particles and information to pass between them to enable it. EPR claimed that neither of these could be justified. Bohr's argument was that the entangled particles and the observer were not independent, but should be considered as a single system.

The EPR paper pointed out that an entity in some locality cannot influence some other remote entity unless there is some communication between them; its influence is thus limited by its locality. Without communication between the particles, influencing a remote entity would require a hitherto unknown phenomenon, called non-locality by Einstein, which "common sense" said was not possible and which he mocked as "spooky action at a distance". The paper also showed that the simultaneous change in the state of the remote particle would require information to pass between the two entangled particles at the instant of the measurement, not just faster than the speed of light but infinitely fast, and this was not possible.

Einstein believed that particles exist whether they are observed or not, a position he called objective realism, and that the properties of the individual entangled particles must have been set from the start. He thus supported the idea of local reality. He also believed that a quantum particle's properties could be known more accurately than the Uncertainty Principle permitted, and suggested that Bohr's explanations were incomplete and that the information necessary to account for the correlations must be stored somewhere inaccessible, in hidden variables.

In essence Einstein was judging Bohr's quantum mechanics using the laws of classical physics and for almost 30 years the EPR paradox remained a contested thought experiment.


In 1964 physicist John Stewart Bell from Northern Ireland, working at CERN, proposed a way to resolve the paradox. His view was that Einstein's EPR theory and Bohr's non-locality theory could not both be true; one of them must be false. He therefore devised an experiment to test the rival theories and determine which one was true. It involved directing two beams of entangled particle pairs from a single source in opposite directions to two detectors, with each beam passing through an intervening filter. See diagrams and description on the Bell's Inequality page. The numbers of particles in the different quantum states detected at the local and remote locations were counted to determine how they were affected by the angle between the filter orientations, and these measured numbers were compared to the numbers predicted by the rival theories for the same conditions.

Bell initially thought that Einstein's "common sense" reasoning would turn out to be the correct explanation and considered it as the reference. He derived a relationship, known as Bell's Inequality, summarising the expected EPR results: the correlation between the numbers of particles detected at the local and remote locations for filter orientations between 0 and 90 degrees should lie between zero and a maximum of 1, that is, less than or equal to (≤) 1. Anything outside this range would be a violation of Bell's Inequality and, on the EPR view, must therefore be wrong.
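
The quantitative conflict is easiest to see in the CHSH form of the inequality, a later refinement of Bell's original. Local hidden-variable theories require the combined correlation S to satisfy |S| ≤ 2, while for a spin-singlet pair quantum mechanics predicts a correlation E(θ) = -cos θ between detectors set an angle θ apart, allowing |S| to reach 2√2. A minimal sketch (the detector settings below are the standard maximally-violating choice):

import math

# CHSH test: combine correlations for two settings per detector.
# Quantum mechanics predicts E(theta) = -cos(theta) for a spin-singlet pair.
def E(theta_deg):
    return -math.cos(math.radians(theta_deg))

a, a2, b, b2 = 0, 90, 45, 135    # detector angles in degrees
S = E(a - b) - E(a - b2) + E(a2 - b) + E(a2 - b2)
print(abs(S))    # 2*sqrt(2) ~ 2.83: exceeds the local-realist limit of 2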

Unfortunately at the time there was no reliable way of conducting such an experiment.


Various experiments were conducted over the following years to find a definitive answer. The first practical test was proposed in a 1969 paper by John Clauser, Michael Horne, Abner Shimony and Richard Holt using photon pairs, and was carried out by Clauser and Stuart Freedman at the Lawrence Berkeley Lab in 1972. Using beams of entangled photons they counted the photons captured by each detector for a range of different filter angles and showed that, contrary to Bell's expectations, the measurements confirmed Bohr's predictions. The violation of Bell's Inequality thus turned out to be the right answer.

The experiments have been repeated many times since with the same conclusions, but it was notoriously difficult to obtain perfect conditions and many of the tests involved loopholes, such as possible communications between the detectors and other factors which could contaminate the results.

In 1982 a more definitive result was achieved by French physicist Alain Aspect and colleagues at the University of Paris with an experiment which eliminated the most significant of the loopholes then known.

Thus the derided "spooky action at a distance" was finally proved to be real and Einstein's "common sense" was eventually shown to be wrong, by which time, however, Einstein had been dead for 27 years.


Whilst it may seem that entanglement offers an ideal method of instant communications, experiments based on Bell's Inequality have shown that this is not possible; however, in certain situations entangled pairs can be used to coordinate behaviour with a distant partner.


In 1947, investigating the spectral lines of the hydrogen atom, Willis Lamb and colleague Robert Retherford at Columbia University discovered two separate spectral lines around the wavelength λ of 656 nm (457,000 GHz) instead of the expected single line. They had expected two coincident lines corresponding to the s and p electron orbitals which Dirac's atomic theory had predicted should both have the same energy state. Working at the lower frequency end of the spectrum, and with the benefit of precision microwave instruments recently developed during World War II, they measured a relatively small frequency difference of 1.057 GHz between the two lines which became known as the Lamb shift. See examples of hydrogen atomic spectra. The measurements indicated that the corresponding two orbitals had slightly different energy states and that Dirac's theory must therefore be incomplete.
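
Converting the measured frequency difference to energy with E = hf shows just how small the discrepancy was, and why wartime microwave instruments were needed to resolve it. A minimal sketch of the arithmetic:

h  = 6.62607015e-34    # Planck's constant (J.s)
c  = 2.99792458e8      # speed of light (m/s)
eV = 1.602176634e-19   # Joules per electron-volt

lamb_shift = h * 1.057e9 / eV        # the 1.057 GHz Lamb shift as an energy
optical_line = h * c / 656e-9 / eV   # the ~656 nm hydrogen line itself

print(lamb_shift)                    # ~4.4e-6 eV
print(lamb_shift / optical_line)     # ~2e-6: a few parts per million of the line energy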

Lamb presented his findings at the Shelter Island Conference of invited physicists at Long Island, New York later that year where the results caused a stir.

The explanation given for the anomaly was that even though the two orbital states of the hydrogen atom's single electron had similar energies, just above the ground state, the electron behaves as an orbital or cloud, not as a point source. The two orbitals have different shapes, one spherical (the s state) and the other elongated (the p state), which meant that they should respond differently to the positive charge of the atomic nucleus.

Nevertheless, this was not enough: the electrodynamic theory of the day could not account for the measured difference.


The Problem of Infinities. When an electric charge moves through space it generates an electromagnetic field. The self-energy of the electron results when it moves through its own self generated electromagnetic field. Current theory predicted that the self-interaction between a free electron and its own electric field would result in the electron having infinite energy and, from Einstein's energy-mass equivalence, it would also have infinite mass. This is because the force on a charged electron is inversely proportional to the square of its distance from the source and, since the source is its own electric field, the distance immediately next to the electron will be zero and the force will consequently be infinitely large, giving the electron infinite energy and infinite mass. For similar reasons the calculated Lamb shift would also be infinite. This is clearly not possible and the problem of infinities perplexed the conference delegates.


Renormalisation

Present at the conference was Dutch physicist Hendrik Kramers who had previously studied the problem of infinities and pointed to the solution but had not followed through. At the suggestion of his former teacher and fellow Dutchman Hendrik Lorentz he had introduced the idea that the observed mass of a charged particle such as an electron could be considered as being made up of two contributions, its intrinsic, hypothetical bare mass plus the infinite electromagnetic mass resulting from the electromagnetic self-interaction. The sum of these two contributions, the observed mass, was also called the dressed mass. Dealing with the two contributions separately could possibly lead to a solution.


Kramers' ideas were enthusiastically picked up by Hans Bethe, who was also present at the conference, and who quickly found a way to eliminate the troublesome infinities from the calculation of the Lamb shift. His reasoning was as follows:

  • An electron in a Hydrogen atom has a total observed energy which is given by the energy it possesses by virtue of the specific orbital it occupies, plus a certain infinite self-energy which corresponds to the electron's electromagnetic mass, plus a small "correction" due to the distinct ways in which the different shaped electron orbitals are affected by the presence of the nearby atomic nucleus (in this case, a single proton).
  • In mass terms, it has a total, dressed mass given by its bare mass, plus the infinite electromagnetic mass.

By contrast:

  • A free electron - completely removed from the hydrogen atom - has a total observed energy given simply by its infinite self-energy.
  • In mass terms, it has a total, dressed mass given by the bare mass associated with the free electron, plus the electromagnetic mass.

Since both cases contain an infinite self-energy term, by subtracting the total (observed) energy of the free electron from the total (observed) energy of the electron in the hydrogen atom, the two infinite self-energy terms will disappear, leaving just the electron's orbital energy in the hydrogen atom plus the "correction" which corresponds to the Lamb shift.

This is known as renormalisation.
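
The bookkeeping of the subtraction can be sketched symbolically. In the sketch below a finite cutoff L stands in for the divergent self-energy term; the symbols a, E_orbital and correction are purely illustrative assumptions, not the real QED quantities:

import sympy as sp

# Bethe-style subtraction, illustrated with a cutoff L in place of infinity.
L, a, E_orbital, correction = sp.symbols('L a E_orbital correction', positive=True)

E_self  = a * L                            # self-energy: diverges as L -> infinity
E_bound = E_orbital + E_self + correction  # electron bound in the hydrogen atom
E_free  = E_self                           # free electron, same divergent term

difference = sp.simplify(E_bound - E_free)
print(difference)                          # E_orbital + correction: the infinities cancel
print(sp.limit(difference, L, sp.oo))      # still finite when the cutoff is removed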

Even though no two infinities can be expected to be the same, the method worked and Bethe's theory fitted Lamb's experimental results very well. To some people this appeared to be a miracle.


Bethe announced his discovery at a formal lecture at Cornell and admitted that his calculations did not take into account relativistic effects. Richard Feynman, who was present, went up to him and said "I can do it for you. I will bring it to you tomorrow". He delivered on his promise, coming to Bethe the next morning with a method that could resolve the relativistic issues, but working through it together that morning they made an error somewhere and the problem appeared unresolved. It took another two months of work before Feynman reworked the maths and realised that his original approach had been right all along.


Similarly, in other suitable calculations, the infinite electromagnetic mass can be eliminated from the dressed mass to leave the bare mass.

More generally, in certain circumstances infinities can be attached as corrections to actual measured values of mass and charge as determined by experiments and may be absorbed in subsequent calculations to yield finite results in good agreement with experiments.


Bethe's idea of renormalisation provided the tools for subsequent harmonising of Maxwell's electromagnetics with the new quantum physics by Feynman, Tomonaga and Schwinger the following year.


Lamb was awarded the Nobel Prize in physics in 1955 for his discoveries related to the Lamb shift.


Renormalisation works, but it is still not fully understood, and to this day many physicists are uncomfortable with it, regarding it as a mathematical contrivance. Feynman accepted it, though he called it "hocus-pocus".


In 1948 physicists Sin-Itiro Tomonaga of Japan and Americans Julian Schwinger and Richard Feynman, working independently, recast Maxwell's theories of electromagnetics to take into account subsequent advances in relativity theory, quantum mechanics and Dirac's model of the electron and his early work on quantum fields (see above) to formulate a more general theory of Quantum Electrodynamics (QED). While this was a major achievement, the QED rules only applied to the electromagnetic interactions between electrons (or their antiparticles) and photons and not to the larger protons and neutrons or to other recently discovered particles.

The trio were jointly awarded the Nobel Prize in physics in 1965 for their work on QED.


In 1948 Feynman also published visual aids to depict the possible interactions between particles based on QED rules. Known as Feynman Diagrams they were later expanded to incorporate reactions involving the weak nuclear force and the strong nuclear force once the relevant theory (QCD) had been developed. See description and examples of Feynman Diagrams.


In a 1999 worldwide poll of 130 leading physicists by the British journal "Physics World", Feynman was ranked as one of the ten greatest physicists of all time. He was a mathematical wizard with a deep understanding of physics and exceptional intuition. A charismatic and playful free thinker, he was a great educator and explainer, loved by his Caltech students whom he inspired. His energy and self esteem however sometimes irritated his colleagues.

During World War II, he worked on the Manhattan project at Los Alamos where, with Hans Bethe, he developed a formula for calculating the yield of a nuclear fission bomb.

He also played a pivotal role on the 1986 Rogers Commission investigating the space shuttle Challenger disaster, determining that the accident was caused by a design flaw in the O-ring seals intended to prevent the escape of combustive exhaust gases from the joints between sections of the solid rocket boosters.


Noted for his jokes and pranks, while at the highly secret Los Alamos labs, he spooked some of his colleagues by working out or guessing the codes of the combination locks on the safes and cabinets holding their top secret documents and leaving cryptic notes to let them know they had possibly been compromised. In his spare time he led a hedonistic lifestyle particularly in his later years when he was an enthusiastic player of the bongo drums, frequenting bars and nightclubs and eyeing the ladies.


Just when we thought we had an elegant and simple explanation of the structure of matter with three sub-atomic particles, a nucleus of protons and neutrons with electrons orbiting around it, along came quantum mechanics in the 1920s and shook the foundations of physics. But it didn't end there: the detection in 1932 by Anderson of the positron predicted by Dirac indicated the existence of a lower level of elementary particles which make up the basic building blocks of the sub-atomic particles. It initiated the discovery over the next 50 or more years of whole families of elementary particles including Leptons, Quarks, Bosons, Mesons and Baryons, with each family including up to a dozen or more fundamental particles, many of which have corresponding anti-particles. Examples are Muons, Gluons, Pions, Kaons and the whimsically named Up, Down, Top, Bottom, Strange and Charm Quarks to name but a few. While this is interesting, and several highly specialised applications have been developed, nobody has yet found commercially viable, mass market applications for these particles. But then Rutherford did not foresee any use for nuclear energy when he discovered nuclear radiation.


See more about Leptons, Quarks and the Standard Model of Particle Physics.





1900 Sales of internal combustion engined cars overtake sales of electric cars for the first time, although more than half of the cars already on the world's roads are still electric.


1900 German physicist Paul Karl Ludwig Drude developed a model to explain electrical conduction based on the kinetic theory of electrons moving through a solid.
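
Drude's kinetic model leads to the familiar expression for conductivity, σ = n e² τ / m, where n is the density of free electrons, τ the mean time between collisions and m the electron mass. A minimal sketch using typical textbook figures for copper (the values of n and τ are illustrative assumptions):

e   = 1.602e-19   # electron charge (C)
m   = 9.109e-31   # electron mass (kg)
n   = 8.5e28      # free electrons per cubic metre in copper
tau = 2.5e-14     # mean time between collisions (s)

sigma = n * e**2 * tau / m
print(sigma)      # ~6e7 S/m, close to copper's measured conductivity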


1900 Belgian car maker, Pieper, introduced a 3½ horsepower "voiturette" another variant of the hybrid electric vehicle (HEV). An electric motor/dynamo was mounted in line with a small petrol engine and acted as a generator during normal driving, recharging the batteries. For hill climbing the motive power was augmented by battery power as the electric motor was switched to supplement the power of the engine.

Later versions used higher capacity batteries (28 Tudor batteries in series) and a 24 horsepower engine connected to a higher power electrical drive via a magnetic clutch. The clutch mechanism allowed energy to be recovered by regenerative braking as well as the use of the higher power electric motor to drive the vehicle on its own.


1900 Irish born American John P. Holland launched his first submarine the Holland I in 1878. A crude design, carrying a crew of one, it was powered by a petrol engine and ran on compressed air when submerged. Holland was a sympathiser of the Fenian Brotherhood, an Irish revolutionary secret society, forefathers of the IRA, founded in the United States. He designed the Fenian Ram, a three man submarine which was launched in 1881, for attacking British shipping. Finding the Fenians unreliable customers he made several unsuccessful attempts to sell his submarines to the US government, eventually launching his sixth submarine the Holland VI in 1898. It was a dual propulsion submarine which used a 45 h.p. Otto petrol engine for propulsion and battery charging while on the surface and a 110 Volt electric motor powered by 60 Lead Acid cells with a capacity of 1500 ampere hours for propulsion when submerged. This time his demonstration was successful and the submarine was purchased by the US government. It was commissioned in 1900 and renamed the USS Holland, also known as the SS1, becoming the US Navy's first submarine. Although it carried a crew of only five plus an officer, the Holland VI was a major breakthrough in submarine design. For the first time, all the major components were present in one vessel - dual propulsion systems, a fixed longitudinal centre of gravity, separate main and auxiliary ballast systems, a hydrodynamically advanced shape, and a modern weapons system. The configuration and design principles used in the Holland VI remained the model for all submarines for almost 50 years.


1901 Thomas Alva Edison in the USA also patents a rechargeable alkaline cell, the Nickel Iron (NiFe) battery. Another one of Edison's 1093 patents.

Nickel Iron batteries were very robust, designed for powering electric vehicles, but with the rise of the internal combustion engine their main applications became railway traction, fork lift trucks and utilities.


1901 Patent granted to Michaelowski in Russia for the rechargeable Nickel Zinc battery.


1902 The Mercury Arc Rectifier invented by American engineer Peter Cooper Hewitt. A spin off from developments of the mercury arc lamp it was capable of rectifying high currents and found use in electric traction applications which used DC motors.


1902 Twenty years after the introduction of electricity supply in the USA only 3% of the population were served by electricity.


1903 The invention of the Electrocardiograph by Indonesian born Dutchman Willem Einthoven was announced after a long gestation period. Building on Waller's work of 1887 (and the contributions of many others following in the footsteps of Galvani) it used a sensitive "string galvanometer" of Einthoven's own design to pick up small electrical currents from the patient's torso and limbs. (Galvani's theories about Animal electricity vindicated?)

Einthoven is now credited with the design of the electrocardiograph for which he received the Nobel Prize in 1924.


1903 On December 17th at Kitty Hawk, North Carolina, American inventors Wilbur and Orville Wright made the first controlled, powered flights in an airplane which came to be known as the Wright Flyer which they had designed and made themselves. They made two flights each. The first flight, piloted by Orville, lasted 12 seconds and covered a distance of 120 feet (37 m). The fourth flight of the day was piloted by Wilbur and lasted 59 seconds covering a distance of 852 feet (260 m) on a straight flight path.


The brothers were sons of a minister, Bishop Milton Wright of the United Brethren Church, and led very correct lives. They neither smoked, drank nor married, lived at home with their parents and always wore conventional business suits even while working on their machines. They ran a small bicycle building and repair business and in their spare time were enthusiastic participants in the sport of gliding which was popular at the time. Neither of them had more than a high school education, Orville dropped out in his junior year and Wilbur did not graduate, yet with only very limited resources and training they showed great scientific ingenuity and professionalism.


In 1901 they decided to apply their mechanical skills and their gliding experience to building a powered flying machine. They were familiar with the works of Smeaton, Cayley, Lilienthal, Chanute and Langley and building on this knowledge they conducted extensive tests to confirm their theories while investigating their own improvements. Key innovations which they introduced were:


  • The use of a wind tunnel, which they built themselves, to verify Smeaton's lift equation and to determine the aerodynamic efficiency of their designs (a sketch of the lift equation and the correction they made to it appears after this list). It provided more representative, smooth air flow, enabling more accurate measurements than Smeaton's whirling arm, which stirred and whipped up the ambient air as it rotated so that the model passed through moving, turbulent air instead of the desired still air, causing a velocity offset as well as a degree of uncertainty in the measurements. Balance springs were used to measure the aerodynamic forces on the models. The wind tunnel was used to investigate the lift and drag of over 200 wing profiles and also to optimise the design of the propellers. They also extensively flight tested various structures as kites or gliders to determine their lift, stability and controllability.

  • They were the first to design their propellers with a cross section in the form of an aerofoil and achieved peak aerodynamic efficiencies of 82%, only slightly less than the 85% efficiency of a modern wooden propeller.

  • Conscious of the accidents which had taken the lives of Otto Lilienthal, and more recently English glider pilot Percy S. Pilcher, they realised the importance of maintaining the stability of the aircraft. With this in mind they devised methods to control roll, pitch and yaw to give the aircraft full manoeuvrability and their Wright Flyer was the first to incorporate this 3 axis control.
    • Pitch control was relatively easy to implement by means of an elevator (winglet), mounted in a canard configuration in front of the aircraft. Its angle of attack could be varied by the pilot, enabling the aircraft to climb or dive.
    • Similarly a rudder in the tail provided a simple method of yaw control, enabling the aircraft to execute a turn.
    • Roll control however was more difficult. From their observations of birds they concluded that birds made their bodies roll right or left by changing the angle of the ends of their wings. They replicated this control on their machine by attaching lines to the corners of the wings to twist or warp the wings when required to increase or decrease lift on the outer sections of the wings so that the aircraft could "bank" or "lean" into the turn just like a bird (or a bicycle). The plane was turned by a coordinated use of the yaw and roll controls.

  • The remaining design challenge was to construct a powerful but lightweight engine, a goal which had eluded many would be aviators in the past leaving them stuck on the ground. To save weight the brothers designed a very rudimentary engine which had neither fuel pump, oil pump, water pump, carburettor, spark plugs, battery, radiator, starter motor nor throttle. Weight was further reduced by the use of an aluminium crank case, the first use of this metal in aircraft construction.
  • The engine, which was made locally, had four horizontal inline cylinders with a total capacity of 202 cu in (3.3 litres) in a cast iron block and produced 12 horsepower from a four stroke cycle giving an acceptable safety margin over the minimum of 8 horsepower that they calculated to be necessary. It was placed on the lower wing next to the pilot and was connected by means of bicycle chains to two counter-rotating propellers located behind the wings in a "pusher" arrangement.

    In operation, petrol was gravity fed from a small tank with a capacity of 1.5 quarts (1.4 litres), mounted on a strut above the engine, and mixed with air in a shallow chamber next to the cylinders. Heat from the engine vaporised the fuel-air mixture, causing it to expand through the intake manifold where it was drawn into the cylinders.

    Ignition was produced by two contact breaker points in each combustion chamber which were opened and closed by a camshaft. The engine was started by means of a separate external coil and four dry-cell batteries, not carried on the aircraft, which generated the initial spark. The electric charge to generate the sparks while the engine was running was provided by a low-tension magneto.

    Cooling was by means of a water jacket surrounding the engine, gravity fed from another small tank above the engine. The water was not circulated. Instead the reservoir simply replenished the water which evaporated from the water jacket, which was an integral part of the crank case casting. Lubrication of the internal engine components, the crankshaft and pistons, was by the splash method while a hand held oil can was used to oil the external components such as the camshaft and the bicycle chains.

    The engine weighed 180 pounds (82 Kg) dry weight and the total weight of the aircraft was 700 pounds (318 Kg).


  • The need for pilot skills was not overlooked. They themselves made over 1000 flights on a series of gliders at Kitty Hawk between 1900 and 1902 to develop their skills and were at the time the most experienced pilots in the world.
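
The lift equation the brothers were testing, in the engineering units of the day, was lift = k × S × V² × CL, where S is the wing area in square feet, V the airspeed in miles per hour, CL the lift coefficient and k Smeaton's coefficient of air pressure. Their wind tunnel work convinced them that the long-accepted value k = 0.005 was too high and that a value of about 0.0033 matched their measurements. A minimal sketch of the difference this makes; the wing area, speed and lift coefficient below are illustrative assumptions only:

def lift(k, S_sq_ft, V_mph, CL):
    # Smeaton-style lift equation in pounds, square feet and miles per hour
    return k * S_sq_ft * V_mph**2 * CL

S, V, CL = 510, 27, 0.5           # figures of roughly Flyer-like scale
print(lift(0.005, S, V, CL))      # ~930 lb with the old coefficient: over-optimistic
print(lift(0.0033, S, V, CL))     # ~614 lb with the Wrights' corrected value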

The entire project cost less than $1000 which Wilbur and Orville paid out of their own pockets. Not a bad result for a couple of country boys.


By contrast, a similar attempt at powered flight took place just one week earlier when Washington insider, Prof. Samuel Langley's flying machine, funded by grants of $50,000 from the US government and $20,000 from the Smithsonian Institution, was unable to get off the ground and ended up unceremoniously in pieces in the Potomac (its intended landing point).

While Langley's failures hit the headlines, the Wright brothers' momentous flight was largely ignored by most of the press including the Scientific American and New York Times, perhaps chagrined by their inaccurate prediction only a few days previously. As late as 1905 they were still suggesting that it was a hoax and in 1906 a headline about the Wrights in the International Herald Tribune proclaimed "FLYERS OR LIARS?".


On October 5, 1905 in the third, improved, Wright Flyer, Wilbur made a circling flight of 24 miles (38.9 km) in 39 minutes 23 seconds over Huffman Prairie, Dayton, Ohio, returning to the point of takeoff, conclusively demonstrating the plane's manoeuvrability.


1903 Following on from their work on radiation, Soddy and Rutherford proposed that the phenomenon of radioactivity was due to the spontaneous atomic disintegration of unstable heavy elements into new, lighter elements, an idea which, like many new scientific theories, was treated with derision at the time.

Soddy was a chemist and Rutherford a passionate physicist who believed that chemistry was an inferior science to physics. Ironically it was Rutherford rather than Soddy who was honoured in 1908 with the Nobel Prize for chemistry for the discovery of radioactive transformation. Afterwards Rutherford liked to joke that his own transformation into a chemist had been instantaneous. Soddy resented the fact that his contribution had not been recognised. He was however eventually awarded a Nobel prize in 1921 for his work on isotopes but that did little to mitigate his earlier slight.


1903 British Patent awarded to German Albert Parker Hanson, living in London, for flexible printed wiring circuits intended for use in telephone exchanges. They were based on flat parallel copper conducting strips bonded to paraffin waxed paper. The design used a double layer construction with the copper strips in alternate layers perpendicular to the layer below, forming a rectangular grid. Interconnections were crimped through holes in the paper. As well as through hole connections, Hanson's patent also described double-sided and multi-layer boards.


1903 The Compagnie Parisienne des Voitures Electriques produced the Krieger front wheel drive hybrid electric vehicle (HEV) with power steering. A petrol engine supplements the battery pack.


1903 Russian botanist Mikhail Semenovich Tswett invented the technique of chromatography (from the Greek for "colour writing") which he demonstrated by passing extracts of plant tissue through a chalk column to separate pigments by differential adsorption. It was derided at the time but the principle is now used universally for separating and identifying different chemical compounds from samples.


1903 The teleprinter machine (a.k.a. teletypewriter, teletype or TTY) invented by New Zealand sheep farmer Donald Murray. It could punch or read five digit Baudot coded paper tapes (Murray used his own modified version) and at the same time print out the message on a sheet of paper. It de-skilled the telegraph operator's job, since operators no longer needed to know Morse code, while at the same time greatly speeding up data communications. The teleprinter remained in widespread use until the 1970s when electronic data processing and computer networking replaced many of its functions.


1904 British physicist John Ambrose Fleming invented the first practical diode or rectifier. Although first used in radio applications it became an important device for deriving direct current from the alternating current AC electricity distribution system, revitalising opportunities for DC powered devices, and indirectly, batteries. Fleming's invention of the thermionic valve (tube) could be said to be the beginning of modern electronics. Fleming also invented the potentiometer and the mnemonics known as Fleming's Right Hand Rule and Fleming's Left Hand Rule for remembering the three orthogonal directions associated with the force on the conductors, the electric current and the motion in electric generators and motors.

  • Ri(G)ht Hand Rule for (G)enerators. (F)irst (F)inger = magnetic (F)ield, se(C)ond finger = (C)urrent, thu(M)b = (M)otion.
  • The Left Hand Rule with the same mnemonic is used for motors since one of the factors is reversed.
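
Both rules are mnemonics for the vector cross product that underlies them: the force on a current-carrying conductor in a magnetic field is F = I (L × B), so the field, the current and the resulting motion are mutually perpendicular. A minimal sketch (the field, current direction and current magnitude are illustrative assumptions):

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

B = (1.0, 0.0, 0.0)    # magnetic (F)ield along x - First finger
L = (0.0, 1.0, 0.0)    # (C)urrent direction along y - seCond finger
I = 2.0                # current in amps

F = tuple(I * c for c in cross(L, B))
print(F)               # (0.0, 0.0, -2.0): force, and hence (M)otion, along z - thuMb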

1904 German engineer Christian Hülsmeyer invented and patented the first practical Radar for detecting ships at sea, which he called the Telemobiloscope. It consisted of a spark gap transmitter operating at a frequency of around 650 MHz, whose emissions were focused by a parabolic antenna located on the mast of the ship. The receiving antenna picked up the reflected signals and when a ship was detected a bell was automatically rung. Using continuous wave transmissions, it was unable to measure distances. Its range was limited to about one mile and at the time neither government nor private companies were interested in it.

The idea was eventually taken up by Robert Watson-Watt who, in 1935, developed Radar technology for detecting and tracking aircraft.


1904 Patent granted to Harvey Hubbell in the USA for the "separable attachment-plug", the first 110 Volt AC mains plug and socket. Still in use today.

It is surprising that we had electric lights and motors, three phase power generation and distribution, cathode ray tubes, x-ray and electrocardiograph machines, alpha, beta and gamma rays, and batteries were over one hundred years old, all before the humble plug and socket were invented.


1904 Taking steam from the local volcanic hydrothermal springs, Prince Piero Ginori Conti tested the first geothermal power generator at Larderello in Italy, using it to power four light bulbs. Seven years later, the world's first geothermal power plant was built on the same site.


1905 The experimental findings of the German physical chemist Julius Tafel on the relationship between the internal potentials in a battery and the current flowing were summarised in Tafel's equation. It is a special case of the more theoretical Butler-Volmer equation (1930) which quantifies the electrochemical reactions in a battery.
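
In its usual form the Tafel equation relates the overpotential η (the extra voltage beyond the equilibrium electrode potential) to the current density i flowing: η = b·log10(i/i0), where b is the Tafel slope and i0 the exchange current density. A minimal sketch with illustrative values of b and i0 (these parameters vary from electrode to electrode and are assumptions here):

import math

def tafel_overpotential(i, i0=1e-6, b=0.12):
    # eta in volts; i and i0 in A/cm^2; b is the Tafel slope in volts per decade
    return b * math.log10(i / i0)

for i in (1e-5, 1e-4, 1e-3, 1e-2):
    print(i, round(tafel_overpotential(i), 3))   # each decade of current adds b volts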


1905 H. Piper in the USA patents the hybrid electric vehicle (HEV), a concept introduced in 1899 by Porsche in Germany and in Belgium by Pieper in 1900 and later demonstrated by Krieger in 1903 in France. A top speed of 25 mph was claimed.


1905 The Society of Automobile Engineers (SAE) was established in the USA to promote the professional interests of engineers and manufacturers in the fledgling automobile industry with 30 members headed by American engineer, Andrew Lawrence Riker as its first president and a young Henry Ford as vice president. Riker was the founder of The Riker Electric Vehicle Company producing his first electric car in 1894, using a pair of Remington bicycles as a base. In 1901, his electric-powered "Riker Torpedo" set a world speed record for electric cars that stood for ten years.


The role of the SAE was expanded in 1916, under the leadership of Elmer Sperry, to incorporate the management of the technical standards of American Society of Aeronautic Engineers, the Society of Tractor Engineers, as well as the interests of the power boating industry. Sperry coined the term "automotive" from the Greek, autos (self), and the Latin motivus (of motion), to describe any form of self powered vehicle and the SAE name was changed to the Society of Automotive Engineers to represent the interests of engineers in all types of mobility-related professions.


1905 French physicist Paul Langevin finally explained the cause of magnetism. He suggested that the alignment of the molecular magnetic moments in a paramagnetic substance was caused by an externally applied magnetic field, and that the influence of the field on the alignment becomes progressively weaker with increasing temperature due to the randomising thermal motion of the molecules. He also suggested that the magnetic moments of the molecule, and hence the magnetic properties of a substance, are determined by the valence electrons. This notion subsequently influenced Niels Bohr in the construction of his classic model of the structure of the atom.

Langevin also pioneered the use of high intensity ultrasound in sonar applications.


1905 German physicist Johannes Stark discovered the optical Doppler effect. In measurements of the light from the fast moving ions emanating from a cathode glow discharge, known as Canal rays, he determined that the light was Doppler-shifted by an amount corresponding to the speed of the ion flow.


In 1913 following in the footsteps of Zeeman, Stark, and independently Italian physicist Antonino Lo Surdo, discovered the splitting of atomic spectral lines in electric fields, now called the Stark effect, which is analogous to the Zeeman effect. (See diagram).


Stark was awarded the Nobel Prize in Physics in 1919 "for his discovery of the Doppler effect in Canal rays and the splitting of spectral lines in electric fields", the Stark effect.

An unpleasant character, like his fellow physicist Philipp Lenard he was fanatically anti-Semitic. An early member of the Nazi party, when Hitler came to power he became president of the Reich Physical-Technical Institute and President of the German Research Association, where he denigrated "Jewish physics" (relativity and nuclear science) and persecuted Jewish researchers such as Albert Einstein and Max von Laue.


1906 Canadian inventor and eccentric genius Reginald Aubrey Fessenden was the first to transmit and receive voices over radio waves, inventing the so-called wireless which made broadcast radio possible. While Marconi's invention was equivalent to Morse's, Fessenden's invention was equivalent to Bell's. Bell superimposed a voice signal onto a DC current whereas Fessenden superimposed the voice signal onto a radio wave (a high frequency AC signal known as the carrier wave), varying the amplitude of the radio wave in a process known as amplitude modulation (AM radio). The term "modulation" was coined by Fessenden.


The radio wave which carried Fessenden's voice signal was provided by a multi-pole rotary radio frequency generator designed by Ernst Frederik Werner Alexanderson, a Swedish born American immigrant working at General Electric. In fact a large input to the design came from Fessenden himself, who also supervised the project. The generator had a large iron rotor into which were milled 360 teeth providing the magnetic poles, rotating at 139 revolutions per second in a multi-pole stator. The output power was 300 Watts at 65 Volts with a frequency of 50 kHz. Promoted by G.E. it achieved fame as the "Alexanderson Generator" but it was not much of an advance on Tesla's 1890 design.
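
The carrier frequency follows directly from the rotor geometry, assuming (as in such inductor alternators) that each tooth generates one cycle per revolution:

teeth = 360                       # magnetic poles milled into the rotor
revs_per_second = 139
print(teeth * revs_per_second)    # 50,040 Hz - the quoted 50 kHz carrier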

Demodulation, or detection, was by means of a diode device of Fessenden's own design which he called a Liquid Barretter. Similar to a crystal detector, this device rectified the signal, allowing only the negative going, or only the positive going, half cycles of the modulated radio wave to pass. When the output waveform was smoothed to remove the high frequency carrier wave, the result was a continuously varying signal representing the information in the original message. Up to that time, radio detectors such as coherers could only detect the presence or absence of a pulse and had to be reset after every pulse. They were suitable for telegraphy but not for voice transmission. Fessenden's system provided continuous detection of the varying amplitude of the radio wave, thus enabling the transmission of voice messages or music.
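
The modulation and envelope-detection chain can be sketched numerically; a minimal sketch, with the carrier scaled down from the real 50 kHz and all signal parameters chosen purely for illustration:

import numpy as np

fs = 100_000                         # samples per second
t = np.arange(0, 0.02, 1/fs)         # 20 ms of signal
message = np.sin(2*np.pi*500*t)      # 500 Hz "voice" tone
carrier = np.cos(2*np.pi*10_000*t)   # 10 kHz carrier
am = (1 + 0.5*message) * carrier     # amplitude modulation, 50% depth

rectified = np.maximum(am, 0)        # the diode/barretter: one polarity only
kernel = np.ones(25) / 25            # simple smoothing of the carrier ripple
envelope = np.convolve(rectified, kernel, mode='same')
# 'envelope' now follows the original 500 Hz tone, up to a constant scale factor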


Fessenden found it difficult to secure financial backing to develop his system. The concept of broadcasting was unknown or not appreciated at the time. Wireless communication was viewed simply as an alternative to the telephone system. The fact that unintended listeners could hear the message was seen as a drawback, not a benefit.

He also offered the rights to the patents which covered his radio broadcasting system to AT&T, who found it "admirably adapted to the transmission of news, music, etc." simultaneously to multiple locations, but decided that it was not yet refined enough for commercial telephone service.


Fessenden was a prolific inventor, with over 500 patents relating to radio and sonar to his name including 5 for the heterodyne principle which made Armstrong rich and famous, but he never got the recognition he deserved. He was neither a good businessman nor an accomplished promoter and lost control of his patents and the possible wealth that flowed from them, dying a bitter and forgotten man.


See also Wireless Wonders.


1906 Patent awarded to American engineer Greenleaf Whittier Pickard working at AT&T for the crystal detector used to detect radio waves. Known as the cat's whisker it used the rectifying properties of the contact between a fine wire and certain metallic crystals, previously described by Braun, in what we would now call a point contact diode. The most common crystal used is naturally occurring lead sulphide, commonly called galena. Pickard also used Silicon Carbide (carborundum) crystals. The same year United States Army General Henry H.C. Dunwoody also patented a crystal detector device based on carborundum.


1907 Leo Hendrik Baekeland a Belgian immigrant in the USA investigating new materials for electrical insulation invented Bakelite, or "Oxybenzylmethylenglycolanhydride" to give it its Sunday name, the first thermosetting plastic which was later used to manufacture everything from telephone handsets to costume jewellery.


1907 American inventor Lee De Forest looking for ways to circumvent Fleming's patent on the diode valve discovered by chance that by adding a third electrode he could use it to control the current through the valve. He was able to use the device to amplify speech and he called it the audion tube (valve). It was the first active electronic device and it was very quickly adopted for use in radio circuits. Based on the success of the audion, de Forest laid claim to the title "The Father of Radio", ignoring the contributions of others.


Now called the triode it was first used as an amplifier but later used also as a switch.


1907 Henry Joseph Round, an English radio engineer working for Marconi in New York, wrote to the "Electrical World" magazine with "A Note on Carborundum" describing his discovery that the crystal gave out a yellowish light when 10 Volts was applied between two points on its surface and that other crystals gave off green, orange or blue light when excited with voltages up to 110 Volts. He had inadvertently stumbled across the phenomenon on which the Light Emitting Diode (LED) depends, but there was not enough light to be useful and, silicon carbide being hard to work with, Round's discovery was mostly forgotten. The phenomenon was rediscovered by Losev in 1922 and again by Holonyak in 1962.


1907 French physicist Pierre-Ernest Weiss postulated the existence of an internal, "molecular" magnetic field in magnetic materials such as iron, with the molecules forming into microscopic regions he called magnetic domains, within which the magnetic fields due to the atoms are aligned. Under normal conditions the domains themselves are randomly oriented and have no net magnetic effect. However, when the material is put in a magnetic field, the domains tend to align themselves with the field, causing the material to exhibit magnetic properties.

The concepts of paramagnetism and diamagnetism were first defined by Faraday in 1846. Magnetic properties are now understood to be a result of electric currents that are induced by the movement of the electrons in individual atoms and molecules. These currents, according to Ampere's law, produce magnetic moments in opposition to the applied field. The electron configuration in the atoms determines its magnetic properties whether diamagnetic or paramagnetic.

Diamagnetic materials, when placed in a magnetic field, have a magnetic moment induced in them that opposes the direction of the magnetic field. Paramagnetic behaviour results when the applied magnetic field lines up all the existing magnetic moments of the individual atoms or molecules that make up the material. This results in an overall magnetic moment that adds to the magnetic field. Pierre Curie showed that paramagnetism in nonmetallic substances is usually characterised by temperature dependence; that is, the size of the induced magnetic moment varies inversely with the temperature.

Weiss's domains apply to ferromagnetic substances like iron which retain a magnetic moment even when the external magnetic field is removed. The strong magnetic effect causes the atoms or molecules to line up into domains. The energy expended in reorienting the domains from the magnetised back to the demagnetised state manifests itself in a lag in response, known as hysteresis. Ferromagnetic materials lose their magnetic properties when heated, the loss becoming complete above the Curie temperature.
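
Curie's temperature dependence is summarised in Curie's law, M = C·B/T, where C is a material-specific Curie constant. A minimal sketch (the value of C is an arbitrary illustrative assumption):

def induced_magnetisation(B_applied, T_kelvin, C=1.0):
    # Curie's law for a paramagnet: M = C * B / T
    return C * B_applied / T_kelvin

for T in (100, 200, 300, 600):
    print(T, induced_magnetisation(1.0, T))   # doubling the temperature halves the moment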


1907 After almost 3000 years of use in various forms, the first patent for the process of silk screen printing or serigraphy (from the Latin "sericum" - silk) was awarded to English printer Samuel Simon of Manchester. Although the use of a rubber bladed squeegee to force the ink through the stencil was already known, he is generally credited with the idea of using silk fabric as a screen or ground to hold a tieless stencil. Screen printing derives from the ancient art of stenciling used by the Egyptians as early as 2500 B.C. and refined by the Chinese in the seventh century A.D.. Screen printing is arguably the most versatile of all printing processes, able to print on any surface, with any shape or contour and any size. Although the silk mesh has been replaced by more durable or stable materials such as polyester and perforated metal screens, the technique is still used extensively in the electronics industry today: for printing thick film and thin film circuits, for printing the etching patterns for printed circuit board tracks, for the precision application of conductive and other adhesives for making connections and mounting components on surface mounted printed circuit boards, as well as for the conventional printing of logos, designs and text on both components and packaging.


1907 French engineer Paul Héroult developed the first commercial electric arc furnaces for steel making. The first plant was installed in the USA and was designed for batch processing scrap metal. The furnace charge of scrap Iron is heated by very large electric currents passing between Graphite electrodes in contact with the metal. Oxygen is also blown into the melt, burning out impurities such as Silicon, Sulphur, Phosphorus, Aluminium, Manganese and Calcium which are converted to slag which can be removed. The operation is a batch process but very fast turn-arounds are possible. It can also be very precisely controlled and started and stopped very quickly if necessary, something which is not possible with blast furnaces. Electric arc furnaces are also cheaper to build and more efficient than conventional blast furnaces.


See also Iron and Steel Making


1908 Construction of the first German pumped storage power plant and of a hydraulic research centre at Brunnenmuehle in Heidenheim by Voith Turbo. Since then many more pumped storage systems have been installed throughout the world. The hydraulic battery.


1908 Swiss textile engineer Jacques Edwin Brandenberger invented Cellophane, made from the cellulose fibres of wood or cotton. It is used as a separator in batteries particularly in silver oxide cells.


1909 Danish biochemist Søren Peter Lauritz Sørensen introduced the concept of pH as a measure of the acidity or alkalinity of a solution.
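
Sørensen defined pH as the negative base-10 logarithm of the hydrogen ion concentration in moles per litre, pH = -log10[H+]; 7 is neutral, lower values acidic, higher values alkaline. A minimal sketch:

import math

def pH(hydrogen_ion_concentration):
    # Sorensen's definition: pH = -log10 of [H+] in moles per litre
    return -math.log10(hydrogen_ion_concentration)

print(pH(1e-7))    # 7.0  - pure water, neutral
print(pH(1e-2))    # 2.0  - strongly acidic
print(pH(1e-12))   # 12.0 - strongly alkaline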


1909 German chemist Fritz Haber developed a method to "fix" Nitrogen (N2) from the atmosphere with Hydrogen (H2) to form ammonia (NH3), an essential ingredient which, together with nitric acid (HNO3), is used in the production of the ammonium nitrate used in fertilisers. The process was scaled up for industrial production by his brother-in-law and compatriot, Carl Bosch, who was also a chemist, working as chief engineer at Badische Anilin- und Soda-Fabrik (BASF).

Nitrogen fixation was developed at a time when there was particular concern about the future availability of nitrate fertilisers such as saltpetre (Potassium nitrate (KNO3)) to meet the world's food demands. The main natural source of nitrate fertiliser, "Chile saltpetre" (Sodium nitrate (NaNO3)), known as caliche or guano, was a massive but finite deposit of desiccated bird droppings, accumulated over millennia, in Chile's Atacama desert.

The synthetic ammonium nitrate fertiliser is an analogue of these other chemical nitrate fertilisers, with an ammonium ion in place of the Potassium or Sodium atom.


Known as the Haber Process, the reaction to produce ammonia is as follows:

Nitrogen from the atmosphere and Hydrogen, originally derived from the electrolysis of water but later by the more efficient steam reforming of natural gas, are combined at high temperature and pressure in the presence of a metal catalyst to form ammonia

N2 + 3 H2 → 2 NH3

The process is highly energy intensive.
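
The stoichiometry fixes the mass balance: one mole of N2 (about 28 g) combines with three moles of H2 (about 6 g) to give two moles of NH3 (about 34 g). A minimal sketch of the arithmetic, using approximate molar masses:

M_N2, M_H2, M_NH3 = 28.0, 2.0, 17.0     # approximate molar masses (g/mol)

def ammonia_from_nitrogen(kg_N2):
    moles_N2 = kg_N2 * 1000 / M_N2
    return 2 * moles_N2 * M_NH3 / 1000  # 2 mol of NH3 per mol of N2, in kg

print(ammonia_from_nitrogen(1.0))       # ~1.21 kg of ammonia per kg of nitrogen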


Ammonium nitrate is subsequently produced by the reaction of ammonia gas with nitric acid in the following strongly exothermic reaction:

NH3 + HNO3 → NH4NO3

The nitric acid used to produce the ammonium nitrate is itself also produced from ammonia in a three stage, catalytic combustion (oxidation) reaction known as the "Ostwald Process" patented in 1902 by Russian-German chemist Friedrich Wilhelm Ostwald.

Previously nitric acid had, like fertiliser and explosives, also been produced from saltpetre, and later Chilean saltpetre, by heating it with sulphuric acid in a process discovered in 1648 by German-Dutch chemist Johann Rudolf Glauber.


It is estimated that currently about half the world's food base depends on the use of the Haber process.


Ammonium nitrate is also an essential component in many explosives and in the following years, during the First World War, the Haber process played a major role in Germany's war effort. When the British navy blockaded access to the Chilean saltpetre deposits, Germany was able to substitute ammonium nitrate produced by the Haber process to maintain the production of its munitions. See also Alfred Nobel and Gelignite.

Furthermore, being a fanatical patriot, Haber's war effort included the development of ways to produce poisonous Chlorine and other gases and the method of delivering them to the intended target for which he became known as the "father of chemical warfare".

Haber personally supervised the deployment of these chemical weapons. His wife Clara Immerwahr however, also a chemist, was a pacifist and in 1915 was deeply troubled by the 7,000 appalling casualties when poison gas was first used against British, French and Canadian troops at Ypres and more so by the 70,000 more casualties in the second attack. One week after the second Ypres attack she committed suicide by shooting herself with her husband's army pistol. The following day Haber returned to the front line. Eventually around 650,000 were killed or injured by poison gas during World War I.


Haber was awarded the Nobel Prize in 1918 for his contribution to alleviating the world's food problem by synthesising ammonia from its components. Apparently, his wartime activities did not preclude him from receiving this honour.


Despite his unstinting war effort on behalf of the "Fatherland", being Jewish, Haber fell foul of the Nazi party when Hitler came to power and he escaped to Switzerland in 1933 where four months later in 1934 he died a broken man.

With terrible irony, another of Haber's developments, Zyklon B gas which he had developed in 1924 purely as an insecticide, was used in Nazi concentration camps eight years after his death to exterminate over a million Jews and other victims of the Nazi Holocaust including members of Haber's own extended family.


Carl Bosch continued with a distinguished career in the chemical industry working for I.G. Farbenindustrie A.G. where he eventually became its Chairman. In 1931 he was also awarded the Nobel Prize for Chemistry.


1909 Hermetically sealed wet battery introduced by Beautey in France.


1910 American Robert Millikan determined the charge on the electron by means of his Oil Drop experiment.

In 1897 John S.E. Townsend, one of J.J. Thomson's research students, and in 1903 Thomson himself with H.A. Wilson (no relation to the inventor of the "Cloud Chamber"), had measured the charge on the electron with a similar method using a water cloud, but their results were inaccurate. Millikan adapted this technique with some ingenious (and some not so ingenious) changes to measure e to within 0.4% accuracy. A fine mist of oil drops was introduced into a chamber in which the air was ionised by X-rays. From the ionised gas some electrons attached themselves to some of the oil droplets. At the top of the chamber was a positively charged plate with a corresponding negative plate at the base. Charged droplets (with electrons) were attracted upwards to the positive plate while uncharged droplets fell downwards under the influence of gravity. By adjusting the voltage between the plates the electrical field could be varied to increase or decrease the upward force on the charged droplets. The voltage was adjusted so that the charged droplets appeared stationary, at which point the electrostatic force just balanced the gravitational force. The charge on the electron could then be calculated from a knowledge of the electrical field and the mass of the oil droplet, determined from the speed at which it falls. Since the magnitude of the e/m ratio had already been determined by J.J. Thomson, the experiment also allowed the mass of the electron to be determined. From his work we know that the electron has a charge of -1.6 x 10^-19 Coulombs and a mass of 9.1 x 10^-31 kg, which is only about 0.0005 of the mass of a proton. From this we can derive that a current of 1 Amp (1 Coulomb per second) is equivalent to an electron flow of about 6.25 x 10^18 electrons per second.
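
A hypothetical worked example of the force balance (all numerical values below are invented for illustration): at the balance point the electric force on a droplet equals its weight, q x E = m x g, from which the droplet's charge follows.

```python
# Illustrative sketch of Millikan's balance condition q*E = m*g.
# All component values are assumed, not Millikan's actual figures.
import math

g = 9.81            # gravitational acceleration, m/s^2
rho_oil = 920.0     # assumed oil density, kg/m^3
r = 1.0e-6          # assumed droplet radius, m
V = 500.0           # assumed balancing voltage, V
d = 0.01            # assumed plate separation, m

m = rho_oil * (4 / 3) * math.pi * r**3   # droplet mass from its radius
E = V / d                                # uniform field between parallel plates
q = m * g / E                            # charge needed to balance gravity

e = 1.602e-19                            # modern value of the electron charge
print(f"q = {q:.2e} C, i.e. about {q / e:.1f} electron charges")
```

In the real experiment the droplet radius was not measured directly but inferred from the droplet's terminal velocity in still air using Stokes' law.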


Although Millikan's method was beautifully simple, his published conclusions did not truly reflect the results of the measurements made. He was selective in choosing the results, discarding two thirds of the measurements made because they did not support his conclusions, at the same time improving the accuracy of the experiment. He was right, but it took others to prove it conclusively.


Millikan initially studied classics and worked as a teacher and administrator, and did not begin serious research until he was almost forty. He was eventually awarded the Nobel Prize for his measurement of the electron's charge and his work on the photoelectric effect.


1910 German physicist and Jesuit priest, Theodor Wulf, measured background radiation at different altitudes with an ionising electrometer of his own design, taking measurements on the ground and at various levels up the Eiffel tower. He expected that the radiation would be emanating from the Earth and that its level would decrease with altitude. He found that the level of ionisation of the air caused by radiation did indeed reduce with altitude, but not as much as he expected. At the top of the tower (330 metres) it was about half what it was at ground level, whereas he expected it to halve in only 80 metres. He concluded that the anomaly was due to some form of extra-terrestrial energy entering the Earth's atmosphere.
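
A minimal sketch of Wulf's reasoning, assuming (as he did) that radiation from the ground is absorbed roughly exponentially with height, halving about every 80 metres:

```python
# Sketch: expected vs observed ionisation at the top of the Eiffel tower,
# assuming simple exponential absorption of radiation from the ground.
half_distance = 80.0    # metres per halving, Wulf's expectation
height = 330.0          # height of the Eiffel tower, metres

expected_fraction = 0.5 ** (height / half_distance)
observed_fraction = 0.5  # Wulf measured about half the ground-level value

print(f"expected: {expected_fraction:.3f} of the ground level value")  # ~0.06
print(f"observed: {observed_fraction:.3f} of the ground level value")
# The observed ionisation is nearly ten times the expected value,
# pointing to an additional source of radiation from above.
```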


Wulf published his results in Physikalische Zeitschrift but they were initially not accepted by his peers. However, two years later his experiment was repeated at higher altitudes by Austrian-American physicist Victor Francis Hess, who took improved electrometers up to 5,300 metres in a series of balloon flights. He confirmed that at that altitude the ionisation of the air increased to four times the ground level value and that the radiation causing it was coming from outside the Earth's atmosphere.

The term "cosmic rays" was subsequently coined by Robert Millikan, and it was Hess, not Wulf, who was credited with their discovery, for which he was awarded the Nobel Prize in 1936.


See more about Cosmic Rays


1910 William David Coolidge working at General Electric in the USA invented the tungsten filament which greatly improved the longevity of the light bulb.


1910 Neon lighting using the techniques discovered by Plücker and Hittorf and the newly discovered neon gas was patented by French experimenter Georges Claude. Substituting different gases allowed a range of colours to be produced. Although the neon lights were used for advertising in France it was not until 1923 that they were brought to the USA by Packard car dealer Earl Anthony.


1911 Dutch physicist Heike Kamerlingh Onnes of Leiden University is generally credited with the discovery of superconductivity. In fact Kamerlingh Onnes' subsequent Nobel award was in recognition of his liquefaction of Helium, not the discovery of superconductivity. It was his assistant Gilles Holst who first observed that the electrical resistance of Mercury suddenly disappeared when it was cooled to about 4 K, the temperature of liquid Helium. Sadly, the contribution of Holst is long forgotten.


1911 In an address to the Röntgen Society, Scottish engineer Alan Archibald (A.A.) Campbell Swinton described in detail the workings of a proposed all electronic television system using a cathode ray tube scanning an array of photocells onto which the image was projected for the transmitter and another cathode ray tube scanning a fluorescent screen as the receiver. This was at a time when the possibilities of radio communication had just been discovered, radio valves were practically unknown, photocells were most inefficient and vacuum technology was still very primitive. Due to obvious difficulties at the time the system was never constructed. It was left to another Scottish inventor, John Logie Baird to demonstrate the first working television system in 1926. It was an electromechanical system based on the Nipkow disc image scanning system. Although Baird's system was used in 1929 for the first public broadcasts in the UK, electromechanical systems proved to be a dead end.


1911 The experiment on radioactivity that contributed most to our knowledge of the structure of the atom was performed by Rutherford, who with Soddy had previously identified the atomic radiation emitted by Uranium and explained the phenomenon of radioactive transformation. Working at the University of Manchester with his students Hans Geiger (later famous for his "Counter") and Ernest Marsden, Rutherford bombarded a thin foil of gold with a beam of alpha particles (Helium nuclei) and observed the scattered particles on a fluorescent screen. They noticed that most of the particles went straight through the foil and struck the screen, but a small fraction (about 0.1 percent) were deflected through various angles, and a very few were even scattered back towards the source.

Rutherford concluded that the gold atoms were mostly empty space which allowed most of the alpha particles through. However, some small region of the atom must have been dense enough to deflect or scatter the alpha particle. He called this dense region which comprised most of the mass of the atom the atomic nucleus and proposed the model of an atom with a nucleus and orbiting electrons which became known as the planetary model.


Awarded the Nobel Prize for Chemistry in 1908 for his investigations into the disintegration of the elements and the chemistry of radioactive substances, Rutherford considered himself a physicist and claimed "All science is either physics or stamp collecting."

He was famous for making spectacular breakthroughs in atomic physics by devising ingenious experiments which could be carried out using the very limited apparatus available at the time.

He also famously said "The energy produced by the breaking down of the atom is a very poor kind of thing. Anyone who expects a source of power from the transformation of these atoms is talking moonshine" If he didn't believe Einstein, he could have at least profited from advice from another of his students, Niels Bohr (1913 below).


1912 J J Thomson and Frederick Soddy discovered isotopes by observing the different parabolic paths traced by ions of different mass when passing through electric and magnetic fields. Soddy formulated the concept of isotopes, for which he was awarded a Nobel Prize in 1921. It states that certain elements exist in two or more forms which have different atomic weights but which are indistinguishable chemically. Their common chemical characteristics are due to the fact that they have equal numbers of protons in their nuclei, while their different atomic weights are due to different numbers of neutrons in the nucleus. They used this phenomenon to construct the first Mass Spectrometer (then called a parabola spectrograph), a tool that allows the determination of the mass-to-charge ratio of ions and the identification of the different compounds contained in chemical samples. The Mass Spectrometer has since become a ubiquitous research tool in chemistry.


The presence of neutrons taking up space in the nucleus paradoxically helps to bind the positively charged protons together by diluting the repulsive electrical force between the protons. The neutrons thus act like a glue holding the nucleus together. See the Standard Model (Baryons).


1912 Various alloys of Stainless Steel were independently developed by inventors working in three different companies.

Similar corrosion resistant steels had been investigated in the past, by the British metallurgist Robert A. Hadfield in 1892, later by L. Guillet and A. M. Portevin in France and W. Giesen in England, and in 1913 by Philip Monnartz in Germany, all of whom reported on the relationship between Chromium content and corrosion resistance. However, none of the results of these investigations were converted into commercial products. The newly developed alloys from 1912 enabled the mass production of stainless steel.

  • American metallurgist and automotive engineer, Elwood P. Haynes, mixed Tungsten with Chromium and steel to produce strong, lightweight, corrosion resistant alloys which could withstand very high temperatures. He had been working for several years on stainless steel alloys but did not apply for a US patent for his martensitic stainless steel alloy until 1915. ("Martensitic" describes the crystal structure of the steel alloy, in this case a body centred tetragonal crystal lattice.) Martensitic steel is brittle with poor toughness, but its hardness and toughness can be improved significantly by tempering. It is also magnetic.
  • Haynes' patent was not granted until 1919.


    Haynes also designed and produced one of America's first automobiles, the Pioneer in 1894.


  • British metallurgist Harry Brearley, working at Firth-Brown in Sheffield, where he had started his employment as a labourer, was searching for high temperature steels which could better resist the erosion or wear of gun barrels caused by the high temperature discharge gases. Experimenting by alloying the steel with different amounts of Chromium, which was known to produce steel with a higher melting point, in 1913 he produced a martensitic steel containing 0.24% by weight of Carbon and 12.85% of Chromium which was the first true stainless steel. To examine the grain structure of the steel under a microscope he used acidic etching reagents such as nitric acid to prepare the sample surfaces. He noticed that these samples were also resistant to chemical corrosion and followed up by exposing the samples to common acids such as those found in foods, including vinegar (acetic acid) and lemon juice (citric acid). At the time Sheffield was the centre of the UK's cutlery manufacturing industry and Brearley saw the opportunity to produce better rust free knives. Failing to persuade Firth-Brown of this opportunity, he commissioned a local cutler to manufacture his own knives using the new steel. He was thus the first to commercialise stainless steel.
  • Brearley announced his product in the USA in 1915 and applied for a US patent there the same year only to find that Haynes had already registered a patent just a few days before.


    Haynes and Brearley eventually joined forces to jointly commercialise their invention.


  • At the same time, engineers at Krupp in Germany, Benno Strauss and Eduard Maurer, patented austenitic stainless steel. (Austenitic steel has a face centred cubic crystal lattice.) They added Nickel to the melt to produce a non-magnetic steel which was more resistant to acids, was softer and more ductile and therefore easier to work than Haynes' and Brearley's martensitic steel. Austenitic steels contain a maximum of 0.15% by weight of Carbon, a minimum of 16% Chromium and sufficient Nickel and/or lower cost Manganese to maintain the austenitic structure over a wide temperature range.
  • Over 70% of modern stainless steel production is austenitic steel.


Subsequently many more variants of stainless steel have been developed with properties optimised for specific applications.


See also Iron and Steel Making


1912 Belgian-born American metallurgist Albert Sauveur, working at Harvard University, published his book The Metallography and Heat Treatment of Iron and Steel, outlining the microscopic structures of iron and steel, which is considered to be a landmark publication establishing the formal study of physical metallurgy. It provided new insight into the mechanism of tempering steel and the effects of heat treatment on the grain, the strength and the toughness of iron and its alloys.


See also Sorby's Metallography


1912 Charles Kettering in the USA invented the first practical self-starter for automobiles, an idea originally patented by Dowsing in 1896. The subsequent adoption by General Motors of battery-started cars provided the impetus for massive growth in the demand for lead acid batteries, spawning new developments and performance improvements. See Willard 1915.


1912 German physicist Max von Laue showed that, when a crystal is illuminated by a narrow beam of X-rays, the rays emerging from the crystal can produce an interference pattern which can be used to determine the physical structure of the crystal. The X-rays, which typically have a wavelength less than, or of the same order of magnitude as, the distances between the ordered atoms in the parallel planes of the crystal lattice, create an interference pattern due to the diffraction of the beam as it passes through the regular spaces in the lattice. The pattern depends on the angle of incidence of the beam and arises from the regular scattering of the beam by certain groups of parallel atomic planes within the crystal and the subsequent combination of the scattered beams with the undeviated beam. Where their paths cross in phase, this produces an intense, regularly spaced array of spots on a photographic plate arranged to capture the emerging beam, centred around the central image formed by the main beam, which passes through undeviated.

Von Laue was awarded the 1914 Nobel Prize for his discovery of diffraction of X-rays.


1912 Australian-born Sir William Lawrence Bragg, working with his father, British physicist William Henry Bragg, at Cambridge University, investigated X-ray diffraction and formulated Bragg's Law to quantify the phenomenon, thus founding the study of X-ray crystallography. The process is used to analyse crystal structure by studying the characteristic patterns of X-rays that deviate from their original paths because of deflection by the closely spaced atoms in the crystal. This technique is one of the most widely used structural analysis techniques and plays a major role in fields as diverse as structural biology and materials science. X-ray crystallography is used in battery design to analyse alternative chemical mixes and the associated crystal structures to optimise the physical and chemical characteristics of the active chemical contents in the cells. The ability to study the structure of crystals marked the origin of solid-state physics and provided a vital tool for the development of today's semiconductor industry.

The two Braggs were jointly awarded the 1915 Nobel Prize for Physics for the analysis of crystal structure by means of X-rays.
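
Bragg's Law states that X-rays of wavelength λ reflect constructively from parallel atomic planes a distance d apart when n λ = 2 d sin θ, for integer order n. A minimal sketch (the example wavelength and plane spacing are illustrative):

```python
# Sketch of Bragg's law, n*lambda = 2*d*sin(theta): find the angles at
# which X-rays reflect constructively from atomic planes spaced d apart.
import math

def bragg_angles(wavelength_nm, d_nm, max_order=4):
    """Return the diffraction angle theta (degrees) for each order n."""
    angles = {}
    for n in range(1, max_order + 1):
        s = n * wavelength_nm / (2 * d_nm)
        if s <= 1.0:                    # no reflection once sin(theta) > 1
            angles[n] = round(math.degrees(math.asin(s)), 1)
    return angles

# Copper K-alpha X-rays (~0.154 nm) on planes spaced 0.282 nm apart
# (roughly the atomic spacing in a rock salt crystal):
print(bragg_angles(0.154, 0.282))   # {1: 15.9, 2: 33.1, 3: 55.0}
```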


1912 Scottish physicist Charles Thomson Rees Wilson devised the Wilson cloud chamber as a means of making the tracks of ionising radiation visible in order to detect sub atomic particles such as protons and electrons and other energetic particles causing ionising radiation. It consisted of a closed container filled with a cold vapour such as water or alcohol in air. Reducing the pressure suddenly causes the vapour to become supersaturated. When ionising radiation passes through the supersaturated vapour, it leaves a trail of charged particles (ions) that serve as condensation centres for the vapour, which condenses around them. The path of the radiation is thus indicated by tracks of tiny liquid droplets which persist for several seconds in the supersaturated vapour. Sudden changes in the direction of the path indicate possible collisions between particles. If strong magnets are used to create a magnetic field in the chamber, the curvature of the paths traced by the particles is an indicator of their mass and velocity. Tight curves indicate light or slow moving particles. Gentle curves indicate heavy or very fast moving particles.


In the early days, the main sources of charged particles used in experimental studies were either natural cosmic ray showers or the secondary particles created by the radiation's ionisation of, and collisions with, local matter. Methods of capturing these charged particles and recording their tracks were rather hit and miss. The expansion of the vapour cloud was triggered by the observer at random instants in the hope that some cosmic ray would conveniently pass through the chamber, but the success rate in capturing such an event was only about 1 in 20.


See an example of Charged Particle Tracks as found in cloud and bubble chambers.


In 1932 British physicist Patrick M. S. Blackett and Italian physicist Giuseppe "Beppo" Occhialini at the Cavendish Lab developed the Counter Controlled Cloud Chamber, a much improved version of the Wilson cloud chamber. They placed two Geiger counters, one above and one below the cloud chamber, to monitor the incidence of cosmic rays. If both fired simultaneously it would indicate that a cosmic ray had passed through the chamber. The signal from the Geiger counters was used to trigger three further events: the expansion of the vapour cloud, the flashing of a light to illuminate the chamber and the operation of the camera's shutter to record the event on film. It was not necessary for the camera shutter to act precisely simultaneously with the passage of the cosmic ray through the chamber since the ionised track of water droplets remains after the passage of the ray, just like aircraft vapour trails. This system vastly improved the equipment's data gathering capacity.


A man of many talents, Blackett studied under Ernest Rutherford at Cambridge's Cavendish Laboratory and was himself tutor to graduate student Robert Oppenheimer, father of the atom bomb. He served in the navy in both World Wars seeing active service in WW I and at the Royal Aircraft Establishment (RAE) during WW II where he contributed to Britain's nuclear weapons studies and played a leading role in RAE's operations research capabilities. During peacetime he continued research on particle physics and also geophysics including continental drift. He was made President of the Royal Society in 1965 and elevated to the House of Lords in 1969.

He was awarded the Nobel Prize for Physics in 1948 for his work on cloud chambers and cosmic rays.


In 1952 American physicist Donald A Glaser, working at the University of Michigan, invented the Bubble Chamber particle detector which overcame some of the drawbacks of cloud chamber detectors. It was similar in principle to the cloud chamber, but used a much denser medium to detect the tracks of the ionising particles, and like the cloud chamber it used an electromagnet around the chamber to provide the magnetic field for deflecting the charged particles. In the original cloud chambers, high energy and light particles passed quickly through the thin supersaturated cooled vapours without significant decay or interaction with the vapour's nuclei, so that detecting the tracks of such particles could require a chamber over 100 metres in diameter. Furthermore the cycle of expansion and recompression could take up to a minute.


Glaser's bubble chamber instead used a superheated transparent liquid to detect the particles. The denser liquid contained many more atomic nuclei with which the particles could interact and hence the chamber could be much smaller. It is said that he was inspired by the bubbles rising through beer when the pressure is suddenly reduced as the bottle cap is released.

The working fluid used in his first chamber was diethyl ether held at just below its boiling point in a pressurised vessel. The higher pressure essentially raises the liquid's boiling point. If the pressure is suddenly reduced, charged particles shooting through the now superheated liquid create a disturbance, triggering a local boiling process as they ionise the atoms in the liquid along their paths, leaving a trail of bubbles which can be photographed through the transparent liquid. Pressure had to be restored quickly to keep the liquid below its boiling point, otherwise the liquid would boil uncontrollably. A large piston used to cycle the pressure in the chamber allowed the pressure to be changed very rapidly. Hydrogen, which has a very low boiling point of -253°C (20 K), later became the preferred working fluid because its simpler molecular structure enabled the creation of cleaner, more consistent tracks.

The bubble chamber's decompression and expansion cycle was of the order of one second and could be synchronised with the operating cycle of the particle accelerator which generated the particles enabling many more tracks to be photographed in a given time.


Glaser was awarded the 1960 Nobel Prize in Physics for his invention.


In 1968 the multi-wire proportional chamber (MWPC) particle detector, also called a Drift Tracker, was invented by Polish born, French citizen Georges Charpak working at CERN. It could record the tracks of up to a million particles per second and was fast enough to reveal the interactions of extremely short-lived subatomic particles. It also enabled direct computer analysis of these recordings. These were major advances on the capabilities of cloud and bubble chamber detectors used up till then which could only record one or two trajectories per second.


The MWPC was constructed from an array of parallel fine wires, typically about 1 mm apart, held at a high electric potential and forming individual anodes. These were positioned between two parallel conductive plates, each at a distance of about 5 mm from the wires, held at ground potential to form a single cathode. See a diagram showing the construction of the Drift Tracker. This structure was mounted in a chamber filled with an easily ionised gas such as argon or methane. Each anode wire had its own amplifier connecting it to a computer which identified and stored the position in the chamber of the particle interaction and the magnitude and timing of any pulses induced on each line for subsequent analysis. The drift chamber thus behaves like a large array of Geiger tubes.

Variations of the design included the provision of a magnetic field in the chamber to deflect the particles enabling their momentum to be calculated, the elimination of the parallel cathode plates by using the walls of the chamber as the conducting cathode, and the use of two anode wire arrays at right angles to each other to achieve more accurate two dimensional coordinates of the interactions.

In operation, charged particles pass through the chamber causing the gas to ionise and the electrons (or negative ions) to drift towards the nearest positive anode wire. Because of the high electric field, this electron drift builds into an avalanche, while at the same time positive ions drift towards the cathode plates, resulting in a pulse of current on the anode line connected to the computer, with an amplitude proportional to the number of local electrons captured.


During World War II, Charpak served in the French Resistance but, being Jewish, he was arrested by the Vichy authorities. He survived imprisonment in the Dachau concentration camp, returning to France after the war where he gained the French degree of Civil Engineer of Mines and later a PhD in Nuclear Physics.

In 1992 he was awarded the Nobel Prize in Physics for his development of the multi-wire proportional chamber.


1912 Twenty one year old Russian immigrant David Sarnoff was working as a telegraph operator at the Marconi Wireless station in New York when SOS signals from the sinking RMS Titanic came in from the frozen North Atlantic. Staying at his post relaying messages for 72 hours straight brought him instant fame. The experience convinced him of the potential of radio and he went on to found the Radio Corporation of America RCA.


1912 American college student Edwin Howard Armstrong invented the regenerative or "feedback" radio receiver which he subsequently patented in 1914. By using positive feedback he dramatically increased the gain of the valve amplifiers used in radio circuits improving their sensitivity. Lee De Forest subsequently claimed credit for this invention because it used his audion valve. See also Frequency Modulation.


1912 Deaf American astronomer Henrietta Swan Leavitt, hired by Harvard College Observatory to catalogue the brightness of stars in the Magellanic Clouds from thousands of glass photographic plates, noticed that the changing brightness of Cepheid Variable stars was related to the length of their periodic cycles of variation (typically between 1 and 50 days). Since all the stars in the Magellanic Clouds are at approximately the same distance from the Earth, she deduced that their relative brightness could be directly compared. She published her conclusion that the intrinsic brightness of Cepheid stars is directly related to the time taken to complete a full pulsation cycle of their brightness, a result known as the period-luminosity relation. Thus, once the period is known, the brightness can be inferred. Bright objects of known luminosity, such as the Magellanic Cloud Cepheids, are called standard candles.


The first pulsating star was discovered in 1784 by English astronomer Edward Pigott, who detected the variability of the star Eta Aquilae. However, it was the discovery a few months later of a second pulsating star, Delta Cephei, by John Goodricke, a Dutch born, profoundly deaf, amateur astronomer living in England, which gave its name to this new class of variable stars. The variation in brightness of Cepheid stars occurs as the supply of Hydrogen fuelling the star's energy creation diminishes, creating an imbalance between the inward gravitational pressure and the outward pressure due to the nuclear fusion reactions, which causes the star to expand and contract.


Leavitt died of cancer at the age of 53 before the far reaching implications of her discovery on our understanding of the Universe were realised by Harlow Shapley and Edwin Hubble and she was not recognised in her lifetime.


1912 Some comments about science and philosophy by British mathematician and philosopher Bertrand Russell

  • Science is what we know, and philosophy is what we don't know.
  • Facts have to be discovered by observation, not by reasoning.
  • The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.

See also Gilbert's rational thought (1600).


1913 Young English physicist Henry Gwyn Jeffreys Moseley, working with Rutherford at Manchester, explained for the first time the fundamental pattern underlying the periodic table. Studying the X-ray spectra emitted by elemental targets bombarded with cathode rays (electrons), with the wavelengths measured by diffraction in crystals, he found that the heavier the element, the shorter the wavelength and the more penetrating the X-rays emitted, indicating a systematic relationship between the wavelength of the X-rays and the element's place in the periodic table. (X-rays are generated when a focused electron beam, accelerated across a high voltage field, bombards a stationary or rotating solid target.) He determined that the positive charge on the nuclei of the atoms always increases by 1 in passing from one element to the next in the periodic table, and he called this the atomic number. Moseley's discovery showed that atomic numbers were not arbitrary, as had previously been thought, but followed an experimentally verifiable pattern. He predicted the existence of two new elements, now known to be radioactive, non-naturally-occurring Technetium and Promethium, by showing that there were gaps in the sequence at numbers 43 and 61.
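
Moseley's relationship for the strongest (K-alpha) X-ray line is usually written with the frequency varying as the square of (Z - 1), where Z is the atomic number. A minimal sketch of the resulting photon energies, using standard constants:

```python
# Sketch of Moseley's law for K-alpha X-rays: photon energy varies as
# (Z - 1)^2, with a screening constant of 1 for the K shell.
RYDBERG_EV = 13.6   # Rydberg energy, eV

def k_alpha_energy_ev(z):
    """Approximate K-alpha photon energy: 13.6 * (Z-1)^2 * (1 - 1/4) eV."""
    return RYDBERG_EV * (z - 1) ** 2 * (1 - 1 / 4)

for element, z in [("Iron", 26), ("Copper", 29), ("Molybdenum", 42)]:
    print(f"{element} (Z={z}): ~{k_alpha_energy_ev(z) / 1000:.1f} keV")
# Copper comes out at ~8.0 keV, close to its measured 8.05 keV K-alpha line.
```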


Like many other patriotic youths at the time Moseley volunteered to serve in the First World War and was killed at Gallipoli at the age of 27.


1913 Patent filed by Arthur Berry for etched printed circuits used in heaters. Similar subtractive techniques were also proposed by Littlefield and E. Bassist using photoengraving and electrodeposition of copper but the ideas do not appear to have caught on.


1913 Henry Ford introduced the moving conveyor line for assembly operations. Also called the paced production line, in conjunction with better materials flow to each work place, it enforced work rate and line discipline enabling major efficiency gains to be achieved. It was not popular but the huge reductions in assembly time enabled Ford to pay higher wages. Paced production lines are now the norm for producing high volumes of high labour content products.


1914 American astronomer Vesto Melvin Slipher at the Lowell Observatory in Arizona presented the initial results of his studies of the red shift of light spectra from distant galaxies to the American Astronomical Society showing that out of 15 galaxies, 11 were clearly red-shifted. Taking into account the Doppler effect, this was the first indication of the expanding universe.


1915 American physicist Manson Benedicks discovered the rectifying properties of germanium crystals, a discovery that would ultimately lead to the development of the "semiconductor chip".


1915 Improvements to automotive lead acid SLI battery reliability and safety introduced by Willard Storage Battery Company including rubber plate separators and shortly afterwards hard rubber cases. Previously lead acid battery designs had been diverse and unreliable with cases and separators constructed from a variety of materials such as wood dipped in asphalt, waxed leather, ceramics and glass. For the next 30 years or more, until the availability of easily moulded plastics, the construction of automotive batteries was based on design concepts introduced by Willard.


1915 During the First World War, batteries became essential for powering torches and particularly military field telegraph equipment, but the source of pyrolusite, from which the manganese dioxide needed for Leclanché cells is derived, was controlled by the Germans and an alternative had to be found. In response, French physicist Charles Fery developed an alternative air depolarising battery. The cathode was a large porous carbon pot, only partially filled by the zinc anode, and the electrolyte was open to the air. The design essentially diluted the polarisation effect of the hydrogen generated and promoted contact with the oxygen in the air for recombination into water. It was not very efficient, but it served its purpose and 1.5 million were produced. It could be considered the forerunner of the Zinc-Air battery.


1915 Western Electric engineer Edwin H. Colpitts patented the push-pull amplifier. The design used a phase splitter to separate the positive going part of the wave form from the negative going part and amplified the two parts in separate valves (tubes). After amplification the two parts were recombined to reconstitute the waveform. Since two valves were used the design permitted higher power outputs to be achieved and at the same time, because the voltage swing in each valve was lower, the circuit provided linear amplification free from distortion.


1915 American engineer Ralph Vinton Lyon Hartley, working at Western Electric, invented the variable frequency Hartley oscillator which can be tuned using a variable capacitor. Oscillation is induced by positive feedback around a valve (tube) amplifier. The frequency of oscillation is determined by two inductors (or a tapped single inductor) and a single capacitor. Modern versions use transistors or operational amplifiers to provide the amplification.


The same year, fellow Western Electric engineer E. H. Colpitts (see above) invented an alternative oscillator with slightly better frequency stability. It is the electrical dual of the Hartley oscillator, using two capacitors and one inductor to determine the frequency.
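
Both circuits oscillate at the resonant frequency of their LC tank, f = 1 / (2 π √(L C)). A minimal sketch with illustrative component values:

```python
# Sketch: resonant frequency of the LC tank in Hartley and Colpitts
# oscillators. Component values are illustrative only.
import math

def resonant_frequency(l_henry, c_farad):
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# Hartley: two inductors in series (or a tapped coil) with one capacitor.
l_total = 100e-6 + 100e-6                               # two 100 uH coils
print(f"{resonant_frequency(l_total, 100e-12):.3g} Hz")     # ~1.13 MHz

# Colpitts: one inductor with two capacitors in series across it.
c_series = (100e-12 * 100e-12) / (100e-12 + 100e-12)    # two 100 pF capacitors
print(f"{resonant_frequency(100e-6, c_series):.3g} Hz")     # ~2.25 MHz
```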


1915 A busy year for Western Electric, another of their engineers, American John Renshaw Carson published a mathematical analysis of the modulation and demodulation process and filed a patent for single-sideband and suppressed carrier amplitude modulation techniques which was eventually granted in 1923. His theory paved the way for the development of frequency division multiplexing.


1915 American astronomer Harlow Shapley, working at the Mt Wilson Observatory, realised that Leavitt's period-luminosity relation could be used to estimate the relative distances of different galaxies by comparing the apparent brightness of their Cepheids and applying the inverse square law. The brightness of the Cepheids in a distant galaxy could thus be compared with the brightness of Cepheids in the Magellanic Clouds to estimate how much farther away the galaxy lies.

To provide absolute cosmic distances, however, he needed a reference, and this was provided by Danish astronomer Ejnar Hertzsprung who, in 1913, pioneered a statistical method to calibrate the distance to the Magellanic Clouds.
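
A minimal sketch of the method (the brightness ratio is invented for illustration): since two Cepheids with the same period have the same intrinsic luminosity, the inverse square law gives their relative distance directly.

```python
# Sketch: relative distance of two Cepheids with equal periods from their
# apparent brightness, using the inverse square law F ~ 1/d^2.
import math

def relative_distance(flux_near, flux_far):
    """How many times farther away the fainter Cepheid is."""
    return math.sqrt(flux_near / flux_far)

# A Cepheid in a remote galaxy appears 10,000 times fainter than one with
# the same period in the Magellanic Clouds:
print(relative_distance(1.0, 1.0 / 10000))   # -> 100.0 times farther away
```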


1915 German mathematician Emmy Nöther established a principle, known as Nöther's Theorem that "for any conserved physical quantity, such as energy or momentum, the physical laws describing the behaviour of this quantity are invariant to one or more continuous transformations". This principle is equivalent to a symmetry. In other words: the symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some physical transformation. It means that the laws of physics are invariant (do not change) when we change our viewpoint, either from one location to another or from one time to another. Thus the laws governing the conservation of energy or momentum are invariant to transformations in time or space. They are the same yesterday, today and tomorrow and are also the same at any point in space. Another example of a symmetry in physics is that the speed of light has the same value in all frames of reference no matter what the viewpoint.

Symmetries may be "global" which implies that the transformation must be carried out everywhere simultaneously and uniformly, or more powerfully, they may be "local" which means that a symmetrical transformation can be carried out independently at any location and so have potentially wider applications.


The importance of Nöther's theorem, not published until 1918, is that conservation laws reflect the many deep symmetries of nature so that her theory underlies all of physics. In particular it has been an invaluable tool in developing an understanding of the mysterious world of particle physics. In the simplest case of the electromagnetic force, the electric charge is the conserved quantity and in addition to the space-time invariance, electromagnetic transformations are also invariant to the phase of the electromagnetic wave. This phase invariance is also known as the gauge invariance.

Possible symmetries include the conservation of mass, energy, charge, spin (angular momentum), isospin and even the strangeness of any particle or group of particles in a closed system. In a symmetrical system the total value of each of the relevant parameters must be the same before and after an event. Transformations may include collisions between particles and particle decay as well as changes in time and space. Nöther's principle enabled new theories to be evaluated and confirmed or rejected. It also enabled the existence of several particles to be predicted before they were experimentally observed. If an expected symmetry appears to be "broken", it implies that the underlying assumption is wrong or that there must be some new force or a particle that has not yet been discovered.


In 1919 German mathematician Hermann Weyl attempted (unsuccessfully as it turned out) to unify the rules applying to electrodynamic and gravitational forces by seeking common symmetries which could apply to both forces. This required the group of symmetries characterising the transformations to be applicable in all locations. He likened this specification to the railway network which required all its trains and tracks to have the same gauge, (that is the distance between the rails), at any point in the network, which he called gauge invariance. Based on this analogy, he called the applicable local symmetries, gauge symmetries, a name and requirement which have endured even though his unification theories did not.


Unfortunately quantum gauge symmetries often relate to quantities which cannot be measured directly, and in the 1950s developing quantum field theories of the strong and weak forces became a matter of identifying the conserved quantity and hence the appropriate gauge symmetry.


1916 American chemist Gilbert Newton Lewis advanced Frankland's theory of valency and established the basis of the theory of chemical bonding by proposing that chemical bonds are formed between the atoms in a compound because electrons from the atoms interact with each other. He had observed that many elements are most stable when they contain eight electrons in their valence shell and suggested that atoms with fewer than eight valence electrons bond together to share electrons and complete their valence shells.


By contrast, the compounds used in batteries consist mainly of metal and non-metal atoms held together by ionic bonding, in which electrons are completely transferred from one atom to another. The atoms losing negatively charged electrons form positively charged ions, while the atoms gaining electrons become negatively charged ions. The oppositely charged ions are attracted to each other by electrostatic forces which are the basis of the ionic bond. This explains the theory of dissociation proposed by Arrhenius in 1884.


1916 During military service as an officer of the Tsarist army fighting on Germany's Eastern Front during World War I, Russian engineer and mathematician Aleksandr Ignatyevich Shargei from Kiev (now in Ukraine) filled four notebooks with his ideas about interplanetary flight. After the war and the 1917 Russian revolution he was at high risk of arrest by the Bolshevik authorities as an enemy of the people, so he adopted the identity of a dead man, Yuri Vasilievich Kondratyuk, and in 1925 he self published his ideas in a book, The Conquest of Interplanetary Space, under Kondratyuk's name. In the book he outlined the concept of Lunar Orbit Rendezvous (LOR) using a modular spacecraft consisting of a propulsion unit carrying a small landing craft to reach and orbit the Moon. The propulsion unit would remain in orbit around the Moon while the smaller landing craft journeyed to the surface and back to the propulsion unit, which would then return to Earth leaving the landing craft behind. This strategy was eventually proposed by John C Houbolt for the Apollo Moon Mission.

Kondratyuk also suggested using a gravitational slingshot trajectory to accelerate a spacecraft and he included detailed calculations of the trajectory to take a spacecraft from Earth orbit to lunar orbit and back to Earth orbit, a trajectory now known as "Kondratyuk's Loop" or more commonly the Free Return Trajectory. See Apollo Trans Lunar Injection.

In 1932 Kondratyuk had the opportunity to meet Sergei Korolev, then head of the GIRD (Soviet Rocket Research Group). Korolev offered Kondratyuk a position on his staff, but he declined, fearing that the scrutiny he would come under by the NKVD (Russian Secret Police) would reveal his true identity.


1916 Edwin Fitch Northrup, working at Princeton University, invented the coreless high frequency induction furnace.


1916 Metallurgist Jan Czochralski, born in Kcynia, Western Poland, then part of Prussia (Germany), working in Berlin, accidentally discovered a method of drawing single crystals when he absent mindedly dipped his pen into a crucible of molten tin rather than his inkwell. On pulling the pen out he discovered that a thin thread of solidified metal was hanging from the nib. Experimenting with a capillary in place of the nib, he verified that the crystallised metal was a single crystal and went on to develop the technology for producing large single crystals, still a fundamental process for semiconductor fabrication today.

At the request of the president of Poland, in 1928 he moved back to Poland to take up the post of Professor of Metallurgy and Metal Research at the Chemistry Department of the Warsaw University of Technology where he published many papers. However after World War II he was unjustly accused of aiding the Germans during the war and stripped of his professorship. Although he was later cleared of any wrongdoing by a Polish court, he returned to his native town of Kcynia where he ran a small cosmetics and household chemicals firm until his death in 1953.


The Czochralski (CZ) method of growing single crystals was adopted in 1950 by Bell Labs and is used today in 95% of all semiconductor production.


1917 American engineer George Ashley Campbell was awarded patents for low pass, high pass and band pass filters consisting of capacitors and inductors. These passive electric wave filters had already been employed for several years in the telecommunications industry for signal conditioning, selection and tuning, and similar designs had been developed in Germany by K W Wagner in 1915.


1917 It is popularly believed that Rutherford had achieved the alchemist's dream of transmuting matter, and had also split the atom, which led to the work on nuclear fission. Parts of this belief are, however, a myth, the result of others re-interpreting the facts as new information came to light. He had not split the atom, but he had at least carried out the first man-made nuclear reaction. It was this experiment which prompted many others to investigate the possibility and potential of further nuclear reactions.

In 1917, Rutherford had bombarded Nitrogen gas with naturally occurring alpha particles (Helium nuclei) from radioactive material and obtained atoms which, it is claimed, were an isotope of Oxygen, accompanied by an emission of positively charged particles of higher energy. He had thus created the world's first man-made nuclear reaction, albeit a weak one. At the time he believed that the positively charged particles emitted were Hydrogen nuclei, which he had isolated for the first time, but he had not identified the residual nucleus which remained. He did not publish the results of the experiment until two years later in 1919, when he named the positively charged particles protons.


Some suggest that in fact he had indeed "fragmented" the atom and that the Nitrogen had been transmuted into an isotope of Oxygen. Others speculated that the alpha particle and Nitrogen nucleus had stuck together, with a proton fragment being emitted.

Between 1921 and 1924, Patrick Blackett, one of Rutherford's research fellows repeated the experiments with more sensitive equipment and identified and proved the transmutation of Nitrogen to Oxygen and published his results in 1925.


Rutherford continued his investigations with Cockcroft and Walton who started work in 1928 on a controlled source of high energy particles which enabled them to probe deeper into the atomic structure.


1917 American engineer Gilbert Sandford Vernam working at AT&T Bell Labs invented an unbreakable encryption system which he patented in 1919. Named the Vernam Cipher, it was designed to work on teleprinter communications and used the teleprinter's five digit Baudot Code. Using XOR logic, it mixed the teleprinter's plaintext message, character by character, with a key of the same length consisting of a random string of letters to produce the ciphertext. To decipher the message, the same key would again be combined character by character with the ciphertext to produce the plaintext. A simple but powerful idea.

See an example of XOR logic and Vernam encryption showing how it works.

The letter string which formed the encryption key was known as a One-Time Pad (OTP) and was shared between the sender and the recipient. So long as the key's letter string was truly random, was the same length as or longer than the plaintext message, was kept completely secret and was never used more than once in whole or in part, the Vernam code could not be cracked.
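
A minimal sketch of the cipher in modern terms (the original operated on 5-bit Baudot characters; bytes are used here for simplicity):

```python
# Sketch of the Vernam cipher: XOR each character of the message with the
# corresponding character of a truly random, single-use key (one-time pad).
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # random key, never to be reused

ciphertext = vernam(message, key)         # encrypt
recovered = vernam(ciphertext, key)       # XOR with the same key decrypts
assert recovered == message
```

Because XOR is its own inverse, exactly the same operation both encrypts and decrypts, which is what made the scheme so attractive for teleprinter hardware.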


1918 German electrical engineer Arthur Scherbius applied for a patent for a cipher machine, based on wired rotors, to encrypt commercial communications. A similar machine was patented a year later by Dutch inventor Hugo Koch, and Scherbius' company purchased the rights to Koch's design. The design was marketed under the name Enigma and went through several iterations until it was adopted by the German Navy in 1925, followed in 1928 by the German Army, who added further security features, and by 1935 it had been adopted by all three armed services including the German Air Force. The Germans considered the code to be unbreakable.

See details of Enigma's Secrets and how the code was broken.


By 1932, three young Polish mathematicians, Marian Rejewski, Henryk Zygalski and Jerzy Różycki, had broken the code and determined the wiring of the German military Enigma machine. In 1938 they introduced the Bomba, a mechanical device exploiting a weakness in the key sent with the message which was used to set up the recipient's machine. This allowed the codebreakers to determine the machine set-up for themselves and hence decode the message.


By 1939 the German military had changed procedures and added enhancements to their machine which rendered the Polish methods useless. On the eve of World War II the Polish codebreakers shared their experience with British codebreakers from Bletchley Park, Alastair Denniston, head of the Government Code and Cypher School (GC&CS), and Dillwyn "Dilly" Knox, its Chief Cryptographer, together with French cryptanalysts who had helped the Poles by passing on information about Enigma gleaned from a German traitor named Hans Thilo Schmidt (code name Asché).

Denniston was a linguist, fluent in German, and Knox was a classics scholar who had translated papyrus fragments at the British Museum. Realising that breaking the new Enigma code was not a linguistic problem but a mathematical one, they assigned the task to mathematicians Alan Turing and Gordon Welchman, who developed the Bletchley Bombe to automate the decryption process. Engineer Harold "Doc" Keen, working at the British Tabulating Machine Company, BTM, designed and delivered the first working model in 1940 and this was used successfully throughout the rest of the war to decrypt German military communications without the Germans realising that their system had been compromised.


While at Cambridge, Turing had conceived the possibility of a "Universal" Turing Machine and this brought him to the attention of the government's spy masters who recruited him to Bletchley Park in 1939. For his code breaking achievements there, Winston Churchill and many others, claimed that Turing's work had shortened the Second World War by at least two years.

Turing believed that computing machines could be made to be intelligent and was an early pioneer of Artificial Intelligence (AI). In 1950 he published a paper entitled "Computing Machinery and Intelligence" in which he proposed The Imitation Game which later became known as the Turing Test. It was designed to determine whether machines can think and whether a computer could learn, and possibly match, the intelligence of a human being. It involved two contestants separated from each other, a human and a computer or machine who would be asked a series of identical questions by an interrogator who would evaluate their answers. Conversation would be limited to text-only channels to avoid being influenced by voice intonation. The contestants were not allowed to know each other's answers and the interrogator had to decide from the answers which contestant was the human and which one was the machine. If the evaluator was unable to reliably tell the machine from the human, the machine was said to have passed the test. Up to now, no computer has passed the test.


After the war, Turing's personal life took a disastrous turn for the worse when he left his post in the British Intelligence Service at Bletchley Park to take up a position at Manchester University as Reader in Mathematics and subsequently Deputy Director of the Computing Lab. Living alone he struck up an acquaintance with Arnold Murray, a 19 year-old, unemployed, down and out, gay man whom he invited to stay overnight at his home. Unfortunately during the stay Murray stole £10 from Turing's wallet which Turing duly reported to the police. Investigating this petty theft, the police switched their attention to Turing himself when he revealed that he had had a male lover in his home and instead they charged Turing with the crime of "acts of gross indecency". At the time (1952), homosexuality was illegal in the UK and punishable by life imprisonment and all gay men were regarded as security risks and open to blackmail.

Consequently Turing's security clearance was withdrawn, cutting him off from his life's vocation. In addition, faced with the more serious prospect of imprisonment, and possibly the loss of his University mathematics and computing posts which gave him access to one of the world's only computers, Turing accepted the alternative punishment of "chemical castration" - hormone treatment that was supposed to suppress his sexual urges. It did however cause him to grow breasts.

Depressed by these issues, Turing was found dead in 1954 at the age of 41 having eaten an apple laced with cyanide. It was generally thought that this was suicide, however some thought it was an accident.


Often underestimated is the contribution of Welchman to the cracking of the Enigma code. Working on the design of the Bombe, he introduced the diagonal board which improved the Bombe's mechanical processing efficiency, but more importantly he also pioneered the use of traffic analysis to focus more quickly on identifying key clues and other useful information, not necessarily part of the message, which could be gleaned directly from the encrypted text or the German radio transmissions. This included the radio frequency, call signs, traffic volume, who is contacting whom, message sender, message destination, standardised descriptors or headers, time and date information. The location of the message originator and whether the sender was stationary or moving could also be determined from the direction of the radio signals from the German transmitters as received by Bletchley's remote receiving stations. The point of intersection of bearings from two or more receiving stations indicated the position of the transmitter. These information sources provided strategic insight into the enemy's activity as well as speeding up and simplifying the decryption process.

Traffic analysis is now a fundamental tool of modern national intelligence and security services, while similar techniques are employed by commercial enterprises who use consumer data mining to gain a competitive edge.


See details of How Enigma Worked and How the Code was Broken.


1918 Edwin Howard Armstrong patented the superheterodyne radio receiver solving the problem of providing a wide tuning range and high selectivity between stations. This was achieved by using a variable frequency local oscillator or frequency changer to shift the frequency of the signal (carrier wave plus sidebands) from the desired transmitter to a convenient fixed intermediate frequency (IF). Tuning and amplification take place in a separate narrow band IF amplifier which only needs to be tuned to a single frequency simplifying the design considerably as well as improving performance (selectivity).


German engineer Walter Schottky also independently invented a superheterodyne radio receiver the same year.

A simple version of the idea had been used by Fessenden in 1901 but he had not developed it. He did however give the circuit its name from the Greek heteros (other) and dynamis (force). Until the digital age and phase locked loops, the superheterodyne principle was used in 98% of all radios world wide.
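
A minimal sketch of the frequency plan (the 455 kHz intermediate frequency is the value traditionally used in AM broadcast receivers; the station frequencies are illustrative):

```python
# Sketch of superheterodyne tuning: the local oscillator tracks the wanted
# station so that the difference frequency is always the same fixed IF.
IF = 455e3   # intermediate frequency, Hz

def local_oscillator_for(station_hz):
    """Tune the local oscillator so that f_LO - f_RF = IF."""
    return station_hz + IF

for station in (600e3, 1000e3, 1400e3):
    lo = local_oscillator_for(station)
    print(f"station {station / 1e3:.0f} kHz -> LO {lo / 1e3:.0f} kHz, "
          f"difference {(lo - station) / 1e3:.0f} kHz (always the IF)")
```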


1918 Max Schoop produced high current printed circuit boards with heavy tracks for high power vacuum tube circuits using metal deposition by flame spraying through a mask. While successful, like Berry's ideas before him, they were not taken up by others.


1919 The flip-flop or bi-stable latch circuit, a basic building block of all digital computers and logic circuits, was invented by British engineers William Henry Eccles and Frank Wilfred Jordan working at the government's National Physical Laboratory. Originally implemented with triodes, now with transistors (diagram), it can remember two possible conditions or states and is thus able to store a single bit of information or binary digit, enabling computers to count. This was the circuit chosen in 1958 by Robert Noyce for the first planar Integrated Circuit.
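
A minimal software sketch of the bi-stable behaviour, modelling the latch as two cross-coupled NOR gates (one common realisation of the Eccles-Jordan circuit; gate-level details varied between implementations):

```python
# Sketch: a set-reset (SR) latch built from two cross-coupled NOR gates.
# The stored bit q persists when both inputs are 0 - the circuit remembers.
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int) -> int:
    """Return the new stored bit after applying set (s) and reset (r)."""
    for _ in range(4):             # iterate until the feedback loop settles
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = 0
q = sr_latch(1, 0, q); print(q)    # set   -> 1
q = sr_latch(0, 0, q); print(q)    # hold  -> 1 (the latch remembers)
q = sr_latch(0, 1, q); print(q)    # reset -> 0
```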


Eccles and Jordan were not Americans as reported on many US based web sites. Another internet myth. Eccles did pioneering work on radio propagation and was a Fellow of the Royal Society (FRS). He rose to be President of the Physical Society from 1928 to 1930, and President of the Institute of Electrical Engineers (IEE) in 1926. Jordan faded into obscurity.


1919 The Electret, the electrostatic equivalent of the permanent magnet, was discovered by Mototaro Eguchi in Japan. Electrets are dielectric materials that have been permanently electrically charged or polarised. They are produced by heating certain dielectric materials to a high temperature and then letting them cool while immersed in a strong electric field. The materials are composed of long molecule chains, each with an electric dipole moment which can be formed into electrostatic domains similar to the magnetic domains found in magnets. Electret foils are commonly used in microphone transducers since they do not require a polarising voltage to be applied as in "condenser" microphones.


1919 The tetrode valve was invented by Walter Schottky who discovered that by placing a grid between the anode plate and the control grid of a triode valve, the grid-plate capacitance was reduced to almost one-hundredth of that in the triode. The second grid acted as a screen to prevent the anode voltages from affecting the control grid and eliminated instability (oscillation) caused by anode-grid feedback in the triode valve.


1919 American mechanical engineer and patent lawyer Elliott J. Stoddard patented an "air" engine similar to the Stirling engine. It used two large heat exchangers for the heat source and sink and a valve arrangement to shorten the flow of the working fluid to eliminate dead space and hence improve efficiency. Later versions used alternative working gases such as Helium and Hydrogen.


1919 Alexander McLean Nicholson, working at Bell Labs (then Western Electric) on growing Rochelle-salt piezoelectric crystals for use in loudspeakers, microphones and oscillator circuits, filed patents on his work, but the only development that led to commercially successful telephone products was the crystal oscillator.

When a varying signal is applied across a piezoelectric crystal it expands and contracts in sympathy. The crystal oscillator circuit sustains oscillation by taking a voltage signal from the crystal, amplifying it, and feeding it back to the crystal which resonates at a certain frequency determined by its cut and size.
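
As a rough guide to how the cut and size determine the frequency: for thickness-shear crystals such as the later AT cut, the resonant frequency is approximately a frequency constant divided by the plate thickness. A short Python sketch, assuming the commonly quoted constant of about 1.66 MHz·mm (a textbook figure, not one from this article):

    FREQ_CONSTANT_MHZ_MM = 1.66       # approximate AT-cut frequency constant

    def resonant_frequency_mhz(thickness_mm):
        # Thinner plates vibrate faster: f is inversely proportional to thickness.
        return FREQ_CONSTANT_MHZ_MM / thickness_mm

    for t in (0.5, 0.2, 0.1):         # plate thickness in millimetres
        print(f"{t} mm plate -> about {resonant_frequency_mhz(t):.1f} MHz")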


Working independently of Nicholson, and contemporaneously with him, on circuits using piezoelectric crystals was the academic W. G. Cady. Both applied for patents but, after litigation, judgement was given in favour of Nicholson, backed by Bell Labs, as the originator of the crystal oscillator.


Today more than 2,000,000,000 quartz crystals are produced annually for use in electronic circuits needing precise frequency control including radio tuners, mobile phones, computers, clocks and watches.


1920 The first regular commercial radio broadcasts by KDKA in Pittsburgh. By the end of 1922 a further 563 licensed A.M. radio stations were operating.


1920 Cambridge scientist Francis William Aston, investigating atomic masses using a mass spectrometer, discovered that four Hydrogen nuclei (4 protons) were heavier than a Helium nucleus, which has the same number of nucleons (2 protons and 2 neutrons). He determined that, in general, when nucleons are packed together in the nucleus they lose some of their mass. He described this loss as the difference between the combined mass of the element's constituent protons and neutrons and its actual atomic mass, which he called the mass difference, now called the mass defect.

He also reasoned that something was holding the positively charged protons together in the nucleus which overcame their mutual repulsion preventing them from flying apart. We now call this the binding energy which he thought would be prodigious.

British astrophysicist Arthur Eddington speculated that Aston's mass difference could represent the equivalent amount of energy released when Hydrogen nuclei were fused together into a Helium nucleus, as predicted by Einstein's equation, E=mc², and that this could explain the source of the Sun's energy. In 1939 Hans Bethe explained in detail how this may come about.

We now recognise the mass difference as corresponding to the binding energy associated with the element. It is equivalent to the energy needed to separate an element into its constituent nucleons.
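
A back-of-envelope check for Helium-4, using modern mass values in unified atomic mass units and the standard conversion 1 u = 931.494 MeV/c² (modern figures, not Aston's own measurements), shows the scale of the effect:

    proton  = 1.007276   # u
    neutron = 1.008665   # u
    helium4 = 4.001506   # u (nuclear mass, electrons excluded)

    mass_defect = 2 * proton + 2 * neutron - helium4
    binding_energy_mev = mass_defect * 931.494

    print(f"Mass defect: {mass_defect:.6f} u")             # about 0.030377 u
    print(f"Binding energy: {binding_energy_mev:.1f} MeV") # about 28.3 MeV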


Continuing his spectrographic studies with more elements Aston plotted a chart of their mass differences. Elements at the ends of the periodic table (Hydrogen and Uranium) had high mass differences, reducing towards a minimum for elements near the middle of the table (Iron and Nickel). The ratio between the mass defect and the atomic mass is known as the packing fraction. High packing fractions indicate high mass differences and correspondingly low packing density, or loose packing, of the nucleons and hence low stability of the atom, whereas low packing fractions indicate low mass differences, dense packing of the nucleons and high stability. Aston's chart of mass differences is the mirror image, about the horizontal axis, of the chart of the binding energy of the elements.


See also Cockcroft and Walton's work on this topic


Aston won the Nobel Prize for Chemistry in 1922.


1920 Exploring ways to circumvent De Forest's patents on the triode amplifier or audion tube, American electrical engineer Albert W. Hull, working at General Electric Research Labs, invented the Magnetron.

Attempting to control the anode current by using a varying magnetic field, rather than by electrostatic means, he constructed a vacuum tube containing an anode in the form of a cylindrical tube and a rod shaped cathode contained within the tube and on its centre line. Magnets at each end of the cylinder were used to provide an axial magnetic field along the length of the electrodes. See diagram of Hull's Magnetron. Electrons emitted by the cathode would be attracted directly towards the anode by the radial electric field between the two electrodes but would actually follow a curved path outwards towards the anode due to the influence of the magnetic field. At low magnetic field strengths the curved path of the electrons across the gap between the cathode and the anode would have a large radius. As the field strength was increased the current would remain constant but the radius of the curve would reduce until it reached a critical point beyond which the electrons would not reach the anode but would instead curve back to the cathode, resulting in the current being suddenly cut off.

The device was thus not successful as an amplifier but it did find use as a low power oscillator, taking advantage of the instability caused around the point of critical magnetic field strength and the resonant properties of the electrode structure. In 1924, however, Czech physicist August Žáček and German physicist Erich Habann independently discovered that the magnetron could generate radio waves of 100 megahertz to 1 gigahertz.


See also the Cavity Magnetron.


1921 12% of British homes wired for electricity.


1921 American physicist and engineer Walter Guyton Cady, working at Wesleyan University in Middletown, Connecticut, submitted a paper to the Proceedings of the Institute of Radio Engineers describing, for the first time, the principles of the crystal controlled oscillator circuit. He foresaw their use as frequency standards and filed two fundamental patents in 1920 and 1921.

Radio transmission and reception equipment depend on highly stable, precision quartz oscillators. Before that time, an electronic oscillator used a valve (vacuum tube) amplifier with a tuned (resonant) circuit, consisting of capacitors and inductors, in a positive feedback loop to sustain and control the frequency of oscillation. Cady's circuit made use of the mechanical resonance properties of piezo-electric crystals. It used three valves and a four terminal piezoelectric crystal resonator in the feedback loop, eliminating the capacitors and inductors, and achieved a stability 100 times better than conventional resonant circuits.

In 1923 Cady shared his thoughts with Harvard professor G. W. Pierce, who contacted his patent lawyer and immediately set to work to improve on Cady's design.


Cady also lost out to Bell Labs researcher A.M. Nicholson whose patent for a crystal oscillator was given priority.


1921 American inventor Thomas Midgley, working at General Motors (GM), discovered a fuel additive, tetraethyl lead, which prevented pre-ignition, known as knocking, in internal combustion engines, solving a major problem in the automobile industry. It was launched the following year and quickly adopted by petrol (gasoline) companies worldwide who switched to leaded fuel. Unfortunately lead in certain forms is toxic and for sixty years, almost unchallenged, it polluted the atmosphere, killing or disabling many in the industry who had too close a contact with it, until consumer pressure forced the automakers to begin producing cars that ran on lead free fuel.

It is said that Midgley himself suffered from the effects of lead poisoning.


In 1928 GM assigned Midgley a new task, to find a safe alternative to the toxic refrigerants used in refrigerators and air conditioners. (See Refrigerators) He came up with a range of colourless, odourless, nonflammable, noncorrosive gases or liquids known as chlorofluorocarbons (CFCs) with boiling points suitable for vapour compression refrigerators and personally demonstrated the benign properties of these wonderful new gases by inhaling a lung-full and exhaling it onto a candle flame which was extinguished. Decades, and untold millions of refrigerators, later it was discovered that CFCs were destroying the atmosphere's Ozone layer and jeopardising the ecosystems of the planet.


Never in the history of mankind had so much damage been done to the atmosphere by one man with the best of intentions.


The unfortunate Mr. Midgley was eventually killed at the age of 51 by another of his own helpful inventions. Suffering from polio, he lost the use of his legs. To get himself out of bed he invented a harness, but one day he became accidentally entangled in his contraption, which strangled him.


1920s Diesel electric locomotives first introduced, with electric drives providing the transmission mechanism, eliminating the need for a clutch and a gearbox. (Electric drives provide maximum torque at zero speed, whereas internal combustion engines can only provide driving torque when they are running at speed.)


1922 The BBC was formed in the UK by a group of leading "wireless" manufacturers including Marconi and started a radio broadcast service. Widespread radio broadcasting started around the same time in many countries throughout the world bringing wireless into the heart of many homes and with it a new demand for batteries to power them.


1922 Light emission from silicon carbide diodes was rediscovered in the Soviet Union by the self-taught Oleg V. Losev. He produced a range of high frequency oscillating, amplifying and detector diodes using zinc oxide and silicon carbide crystals, about which he published 16 papers on the underlying theory of operation, and was awarded ten patents on Light Emitting Diodes (LEDs), photodiodes and optical decoders of high frequency signals.

Even more amazing was his discovery of the negative resistance (dI/dV) characteristic that can be obtained from biased point-contact zincite (ZnO) crystal diodes and the possibility of using this negative resistance region to obtain amplification, anticipating the tunnel diode. See negative resistance characteristic. He used these properties to construct fully solid-state RF amplifiers, detectors and oscillators at frequencies up to 5MHz a quarter century before the invention of the transistor.

He designed and constructed over 50 radio receivers, incorporating his own tuning, heterodyning and frequency converting circuits, and built a production line to produce his cristadyne radio receivers, powered by 12 Volt batteries, thirty years before the transistor radio. Inter-stage interaction, inherent in using two-terminal devices to obtain gain, and the adjustment of the cat's whiskers were problematical, but the radios worked. These problems, together with the difficulty of obtaining zincite, which was found in commercially significant quantities in only two mines, both in New Jersey, USA, led to Losev eventually abandoning the cristadyne.


Losev starved to death during the siege of Leningrad in 1942 and the original records of his works were lost.


1922 After the 1917 Russian revolution, naval engineer Nicholas Minorsky emigrated to the USA where he worked with Steinmetz. Using his knowledge of automatic steering of ships, in 1922 he published a paper "Directional stability of automatically steered bodies", outlining the principles of 3 term controllers, the basis of modern PID control systems.
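
The three-term idea is easily sketched in code: the control output is a weighted sum of the present error (P), its accumulated history (I) and its rate of change (D). A minimal Python illustration (the gains and the one-line ship response model are arbitrary assumptions):

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.previous_error = 0.0

        def update(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt                      # I term: history
            derivative = (error - self.previous_error) / dt  # D term: trend
            self.previous_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Steering a ship's heading towards 10 degrees with a crude response model.
    controller = PID(kp=0.6, ki=0.1, kd=0.3)
    heading = 0.0
    for _ in range(5):
        rudder = controller.update(setpoint=10.0, measurement=heading, dt=1.0)
        heading += 0.5 * rudder      # assumed first-order ship response
    print(round(heading, 2))         # approaches the 10 degree setpoint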


1922 German organic chemist Hermann Staudinger published his theories on polymers and polymerisation. He showed that natural rubbers were made up of long repetitive chains of monomers that gave rubber its elasticity and that the high polymers including polystyrene manufactured by the thermal processing of styrene were similar to rubber. Staudinger won the Nobel Prize for Chemistry for his research.

Polystyrene was originally discovered in 1839 by German apothecary Eduard Simon, however he was not aware of its significance. It was first produced on an industrial scale by IG-Farbenindustrie in 1930.


1922 Swedish engineering students Baltzar von Platen and Carl Munters working at Stockholm's Royal Institute of Technology invented the Gas Absorption Refrigerator which has no moving parts and manages to produce a cooling effect purely from a heat source such as burning gas or paraffin (kerosene). No external electricity supply was required. See how this works in the page about Heat Engines. The technology was eventually purchased by Electrolux who successfully commercialised the product.


In 1926, after reading about the death of a Berlin family killed by toxic fumes leaking from the pump of their conventional vapour compression refrigerator, Albert Einstein was deeply affected by the news and set to work with his former pupil, Leo Szilard, to develop a safer alternative system with no moving parts which could result in leaks. They developed three alternative refrigerator systems for which they were awarded 45 patents. One was a gas absorption system similar to that of von Platen and Munters. The second used water pressure from the mains water supply to provide the energy. For the third system they developed an electromagnetic pump to circulate the refrigerant. This induction pump used liquid metal contained in a sealed metal tube around which were wound external coils carrying an alternating electric current. The AC supply caused the liquid metal to oscillate back and forth in the tube like a piston, pumping the refrigerant which was fed through the tube in the space above the metal. Though the pump contained no moving parts, it was complex, noisy and inefficient and needed an external electricity supply to power it. Their designs never went into production. Electromagnetic pumping systems later found use in nuclear power plants.


1923 The Marconi Company in Britain claimed to have made the first practical hearing aid, called the Otophone. It used a carbon microphone and valve (vacuum tube) amplifier but, with batteries, it weighed an impractical 7 kg. It was not until 1953, with the advent of transistors and button cells, that electronic hearing aids became truly practical.


1923 Quality engineers from the Western Electric Company working on sampling inspection theory developed graphs showing the probabilities of acceptance and rejection for different sampling plans. They identified the concepts of Consumer's Risk, the probability of accepting a lot submitted for inspection which contains more than the tolerated number of defectives, and Producer's Risk, the probability of rejecting a lot submitted for inspection which contains no more than the tolerated percentage of defects. In 1926 they produced the first set of Sampling Inspection Tables for single and double sampling, followed in 1927 by tables for determining the Average Outgoing Quality Limit (AOQL). The tables were published by Harold F. Dodge and Harry J. Romig in 1944, however these sampling and control techniques had already found wider use during World War II when standard military sampling procedures for inspection by attributes were developed by the US military and eventually published as Mil Std 105A in 1950.
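
The arithmetic behind such sampling plans follows the binomial distribution: the probability of accepting a lot is the probability of finding no more than the acceptance number of defectives in the sample. A short Python sketch (the plan values n = 50 and c = 1 are illustrative, not taken from the Dodge-Romig tables):

    from math import comb

    def probability_of_acceptance(n, c, defect_rate):
        """P(at most c defectives in a random sample of n items)."""
        return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
                   for k in range(c + 1))

    for p in (0.01, 0.02, 0.05, 0.10):
        pa = probability_of_acceptance(n=50, c=1, defect_rate=p)
        print(f"{p:.0%} defective -> accepted with probability {pa:.2f}")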


The tables and techniques were designed to facilitate better production control, more efficient inspection and to avoid disputes and were very effective in achieving these goals over many years. Unfortunately they also encouraged the notion that faults were inevitable and the idea of an acceptable quality level placed a limit on aspirations to do better, effectively giving a licence to ship a few defects so long as the AOQL was acceptable. An example of "The Law of Unintended Consequences". The danger of these attitudes was finally realised in the 1980s when the public noticed that the Japanese, following principles introduced by W. E. Deming, coupled with Japanese work ethics, produced products which were significantly better than western offerings. Working to Six Sigma quality standards has been the West's response to the Japanese challenge of TQM.


"Statistics means never having to say you're certain" - Anon


1923 Danish chemist Johannes Brønsted and, simultaneously, British chemist Thomas Lowry proposed the Brønsted - Lowry concept of Acids and Bases which states that: An acid is a molecule or ion capable of donating a proton (that is, a hydrogen nucleus, H⁺) in a chemical reaction, while a base is a molecule or ion capable of accepting one. More simply: An acid is a proton donor and a base is a proton acceptor.

The same year Lewis proposed a more generalised concept which states: An acid is a molecule or ion that can accept a pair of electrons while a base is a molecule or ion that can donate a pair of electrons. This explains why metal oxides are basic since the oxide ion donates two electrons while non-metal oxides which accept two electrons to share with the non-metal atom are acidic.


1923 After seeing the design for a quartz crystal oscillator shown to him by fellow academic W. G. Cady, Harvard professor and inventor, George Washington Pierce, immediately recognised its potential and set about producing a much simpler design. Later that year he submitted a paper outlining his own "Pierce oscillator" to the Proceedings of the American Academy of Arts and Sciences and applied for a patent to protect the design. Its performance was no better than Cady's design but it was much simpler and cheaper, using a two terminal crystal and needing only one valve. Royalties from Pierce's patent portfolio were many times his Harvard salary.


The development of precision piezoelectric crystal controlled oscillators enabled the possibility of quartz controlled clocks which provided much better time keeping than mechanical designs.


1924 German psychiatrist Hans Berger was the first person to prove the existence of so-called brain waves, electric potentials or voltage fluctuations in the human brain, using an electroencephalograph to detect and amplify the signals. He experimented by attaching electrodes to the skull of his fifteen year old son Klaus, recording the first human electroencephalogram (EEG).


1924 The modern, moving coil, direct radiator, loudspeaker patented by General Electric engineers Chester W. Rice and Edward Washburn Kellogg.


1924 The ribbon microphone and its converse the ribbon loudspeaker were invented by German engineers Walter Schottky and Erwin Gerlach working at Siemens. The ribbon microphone was constructed from an extremely thin concertina ribbon of aluminium placed between the poles of a permanent magnet.


1924 By sending radio waves vertically skywards and detecting the reflected signal, British engineer Edward Victor Appleton proved the existence of the ionosphere predicted by Heaviside and Kennelly in 1902. By measuring the time delay between the transmitted and reflected waves he was able to determine its altitude as 60 miles above ground. Ionospheric layers are useful in radio communications reflecting the waves around the Earth's curvature.
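
The height calculation is simply half the round-trip echo delay multiplied by the speed of light, as this quick Python check shows (the 0.64 ms delay is back-calculated from the 60 mile figure, for illustration):

    C = 299_792_458                     # speed of light, m/s

    def layer_height_km(delay_seconds):
        # The pulse travels up to the layer and back, hence the factor of 2.
        return C * delay_seconds / 2 / 1000

    delay = 0.00064                     # seconds, round-trip delay
    print(f"{layer_height_km(delay):.0f} km")   # about 96 km, roughly 60 miles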

In 1926 he discovered a further, even more electrically conductive, layer at an altitude of 150 miles. This layer, named the Appleton Layer after him, is a more dependable reflector of radio waves reflecting the shorter radio waves, which pass through the Heaviside layer. Other ionospheric layers reflect radio waves sporadically, depending upon temperature and time of day.


Appleton's work on detecting signals reflected from distant objects formed an invaluable foundation for Britain's defence work on Radar technology before and during the Second World War and earned him the Nobel Prize for Physics in 1947.


See more about Ionisation Layers


1924 Russian emigré, military engineer and physicist Pyotr Kapitza, working at Cambridge's Cavendish Lab, invented the bipolar battery construction which improved battery energy and power density of Lead acid batteries while reducing internal impedance and manufacturing costs. Recently (2017), research is being carried out by the Fraunhofer Institute to adapt this construction for the manufacture of Lithium Iron Phosphate batteries.


He also carried out research on methods of creating very high magnetic fields and investigated their effect on the conductivity of various metals. In 1928, working with fellow physicist J. D. Bernal to create a very pure crystal of the metal Bismuth for experiments on superconductivity, he found that by passing a hot wire through the Bismuth crystal he could draw all the impurities to one end. The technique was later rediscovered in 1952 and used in semiconductor manufacturing, where it became known as zone refining.


Returning to Russia he worked on low temperature physics and cryogenics and developed new methods of liquefying gases. Investigating the properties of liquid Helium, he discovered superfluidity, a previously unknown "state of matter". He observed that at very low temperatures, Helium and other superfluids demonstrate zero viscosity and flow without loss of kinetic energy and that, when stirred, they form cellular vortices which continue to rotate indefinitely.

In 1978, Kapitza won the Nobel Prize in Physics "for his basic inventions and discoveries in the area of low-temperature physics".


1925 Electrical recording using a microphone, an amplifier using De Forest's Audion vacuum tube (valve) and an electrical disc-cutting head, in a system invented the previous year by Joseph P. Maxfield and Henry C. Harrison of Western Electric, was adopted by the Columbia and Victor record companies. Electrical playback also became available the same year using amplifiers and the Rice-Kellogg loudspeaker.

What is surprising is that the basic technologies for implementing electrical recording and playback had been available in the telephone industry since 1877 when Edison invented the phonograph, but for almost fifty years the record industry had persevered with Edison's system of direct acoustic recording on to wax cylinders or discs using large recording horns which both limited and dominated the recording environment. Similarly, playback had remained mechanical over the same period, using clockwork motors, acoustic pick-ups and clumsy horns which gave out limited sound volume.


1925 Between 1925 and 1935 American engineer and politician Vannevar Bush and colleagues developed a series of analogue computers which they called differential analysers. They were capable of solving differential equations with up to eighteen independent variables and were based on interconnected mechanical integrators constructed from gears and mechanical torque amplifiers with the output represented by distances or positions. The 1935 version weighed 100 tons and contained 2000 vacuum tubes, 150 motors, thousands of relays and 200 miles of wire. Processing analogue data is a key requirement of modern control systems, however analogue values can now be represented electrically and processed in linear integrated circuits or converted to digital form for manipulation by microprocessors.
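
What the machine did mechanically can be sketched numerically. This minimal Python example chains two integrations to solve y'' = -y (simple harmonic motion), the kind of problem the differential analyser handled with its wheel-and-disc integrators:

    dt = 0.001
    y, dy = 1.0, 0.0          # initial position and velocity
    t = 0.0
    while t < 3.14159:        # integrate over half a period (0 to pi)
        ddy = -y              # the equation being solved: y'' = -y
        dy += ddy * dt        # first integrator: acceleration -> velocity
        y += dy * dt          # second integrator: velocity -> position
        t += dt
    print(round(y, 2))        # close to -1.0, i.e. cos(pi)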


1925 Charles Ducas described a variety of practical ways for manufacturing printed circuits including etching, electroplating and printing with conductive inks. He also proposed multi-layer circuit boards and showed how to implement connections between the layers.


1926 events - continued after "THEME"



THEME: Events and Developments in Particle Physics Relating to Leptons and the Weak Nuclear Force


See also the Standard Model of Particle Physics and the Timeline of Theories, Predictions and Discoveries to put the following discoveries into context.


In 1926, following the work of Bose and Einstein in defining the characteristics and possible quantum energy states of photons (since extended more generally to bosons), Italian physicist Enrico Fermi and, independently, British physicist Paul Dirac developed similar statistics to describe the energy states of matter particles, all of which obey Pauli's exclusion principle. Named Fermi-Dirac statistics, they apply to all elementary and composite particles with half integer (1/2) spin. Despite his equal claim to the theory, Dirac, a modest man, named all matter particles "fermions" in honour of Fermi, a name which has stuck.

See also Bose-Einstein statistics.


In 1930, in order to explain why the "beta particles" (high energy electrons or positrons) resulting from the radioactive break-up of the atomic nucleus, known as beta decay, were emitted with different energies in apparent violation of conservation of energy laws, Pauli postulated the existence of a small, hypothetical, massless (he thought) and chargeless particle which he called the "neutron particle" (no relation to Chadwick's comparatively massive neutron). This accounted for the different energies and enabled energy, momentum, and angular momentum (spin) to be conserved.


In 1933 Fermi further developed the theory of beta decay and resolved the confusion by renaming "Pauli's neutron particle" as the "neutrino" (the Italian equivalent of "little neutral one"). Up to now (2019), the mass of the neutrino, though very small, has not been determined with certainty.

Fermi also showed that the electrons emitted in beta decay did not seem to come from the cloud of electrons that orbit the nucleus but instead appeared as new electrons, emanating from the nucleus itself. He reasoned that the weak force must be weaker than Wigner's strong force because beta decay is relatively common within atoms, yet it requires a lot of energy to break the strong force and split the nucleus of an atom.

He explained beta decay in terms of a weak nuclear force with effectively no range, entirely dependent on physical contact, and showed that during beta decay a neutron spontaneously decays into a positively charged proton by emitting an electron. He also predicted that the extra particle emitted in beta decay is the neutral (uncharged) antineutrino (also called the electron antineutrino). See Feynman diagram explaining beta decay.
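
In standard modern notation (a summary in today's symbols rather than Fermi's original formulation) the decay can be written:

    n \;\rightarrow\; p + e^{-} + \bar{\nu}_e

where the antineutrino carries away the variable share of the decay energy, which is why the emitted electrons show a continuous spread of energies.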


In 1937, searching for Yukawa's meson, Street and Stevenson discovered the muon, the second generation lepton, like an electron but 207 times heavier.


In 1956 American physicists Clyde Cowan and Frederick Reines proved that both electrons and electron neutrinos are emitted during beta decay. Because neutrinos have no electric charge, they cannot react with photons which carry the electromagnetic force. Neither can they indicate their presence by creating tracks in a cloud or bubble chamber since, without charge, they cannot ionise the chamber's vapour. Instead the experimenters carried out their investigations at the Savannah River Power Plant where they rigged up an improvised scintillation detector to detect the presence of neutrinos by monitoring the results of their reactions. They were able to capture neutrinos from the nuclear fission reactor in two tanks of water where they interacted with protons, creating neutrons and positrons. On colliding with an electron, each positron was annihilated together with the electron, creating a pair of gamma rays which were detected by flashes of light in scintillator material surrounding the tanks. These light flashes were in turn detected by photomultiplier tubes which indicated the capture of about three neutrinos per hour.


In 1947 Cowan and Reines had suggested the concept of a family of particles which react to the weak nuclear force, composed of light weight, charged particles, including the electron and muon together with their associated neutral particles.

The following year Belgian physicist Léon Rosenfeld provided the collective noun Lepton (from the Greek leptós - "small, thin, delicate") for this family of small mass particles. They were not, however, aware at the time of the existence of the massive third generation of these particles, the tau and its neutrino.


Fermi, another of physics' greats, was called the "Pope of Physics" by his peers. He was awarded the Nobel Prize in 1938 for his "demonstration and production of new nuclear elements". See also Fermi's Atomic Pile.

Reines was awarded the Nobel Prize in 1995 for his work on neutrino physics, but Cowan had sadly died in 1974.


In 1957 Harvard physicist Julian Schwinger realised from symmetry considerations that three different bosons must be involved in transmitting the weak force to take account of all the possible different ways the protons and neutrons can interact in the nucleus. Two of these bosons were required to exchange positive and negative charges, now called the W⁺ and W⁻ (weak) bosons, and because of the limited range over which the weak force is felt (10⁻¹⁵ m) they were thought to be massive. Their mass is in fact about 85 times that of the neutron. A third neutral boson, the Z⁰, was required for reactions in which no charge was transferred. He thought (incorrectly) at the time that this neutral boson was a massless photon.


Schwinger asked one of his graduate students, American-born son of Russian immigrants, Sheldon Lee Glashow to investigate further his three boson model of the weak force. Glashow suspected that, since the heavy W bosons carried a charge, they would be influenced by electromagnetic forces and that the weak force must be somehow linked to the electromagnetic force.

By 1958 Glashow believed (also incorrectly) that he had derived a unified theory encompassing both the weak force and the electromagnetic force, later called the electroweak force. At least it was the first step.

In 1960 he further announced that the neutral Z⁰ boson was in fact also a massive particle, about 13% heavier than the W bosons, and responsible for carrying neutral currents in reactions involving no exchange of charge. Unfortunately he was not able to provide a consistent gauge symmetry justifying all of these interactions and there was still no explanation of why any of these bosons had mass when the photon, also a boson, was massless.

(The notion of "neutral currents" is yet another confusing name dreamed up by particle physicists. It has nothing to do with electricity and no charge transfer is involved - it simply refers to the exchange of the neutral Z⁰ particles.)


Nevertheless, in 1964 Glashow, together with Stanford physicist James Bjorken, in their search for an appropriate gauge symmetry, went on to predict the existence of a fourth quark which they named the charm. They proposed a scheme in which there were four types each of quarks and leptons with the charm forming a partner to the strange quark creating a parallel symmetry between quarks and leptons which fitted nicely with the fledgling ideas for a Standard Model describing the physics involved. They chose the name charm because they were "fascinated and pleased by the symmetry it brought to the subnuclear world".

The proposition was reinforced in 1970 by Glashow working with fellow theorists Greek John Iliopoulos and Italian Luciano Maiani at Harvard who showed its applicability to more complex weak force interactions.

Their predictions were validated in 1974 with the discovery of the J/ψ meson, the first particle made of charm quarks, by Richter and Ting.


In 1962 American physicists Leon M. Lederman, Melvin Schwartz and Jack Steinberger, working at Brookhaven's AGS (Alternating Gradient Synchrotron) accelerator, isolated the muon-neutrino. Despite the abundance of neutrinos in the Earth's environment, they are very difficult to detect because they have no charge, and because they are very small and light and move at almost the speed of light, they pass almost unhampered through all matter, even the Earth, which makes them very difficult to contain and pin down. Furthermore, because of the neutrino's deep penetrating properties, neutrino detectors need shielding to prevent contamination from external sources.

The AGS accelerator produced an intense 15 GeV proton beam directed onto a beryllium target, creating a beam composed mostly of pions which decayed into a collimated beam of muons and neutrinos which smashed into the neutrino detector.

The first stage of the detector was a 44 foot (13.5 m) thick steel shield weighing 200 tons, made from armour plates from scrapped warships, which stopped all particles except the neutrinos which continued on into a 10 ton spark detector. This chamber was constructed from a series of 90 aluminium plates, each an inch (2.54 cm) thick with neon gas filling the spaces between the plates.

To reduce the interference from penetrating muons contained in cosmic radiation, the measurements were active only during short pulses of 3 microseconds when the accelerator delivered its neutrino particles. During the 8 month duration of the experiment, data was accumulated for 25 days, but because of the short pulses, the effective total measurement time for the detections was only 6 seconds. Each neutrino pulse contained 10⁷ neutrinos and during the 6 seconds the experiment was active, a total of 10¹⁴ neutrinos went through the spark detector.

As neutrinos flooded through the spark chamber, one would occasionally strike a proton in an aluminium nucleus, producing a neutron and either an electron or a muon, according to theory. This charged particle would ionise the gas, creating a visible spark track when high voltages were applied across the plates. These spark tracks were detected and photographed providing a picture of the path of the particles through the detector. A total of 51 neutrino interactions were registered with tracks observed in the spark chamber showing that when muons were produced, they passed through the aluminium plates without further interaction. Any electrons produced however would not pass through the plates. It was therefore concluded that the neutrinos in the beam which caused collisions in the detector to produce muons were likely different from the electron-neutrinos involved in beta decay which produced electrons, and must therefore be muon-neutrinos.


Lederman, Schwartz and Steinberger were awarded the Nobel Prize in Physics for "the neutrino beam method and the demonstration of the doublet structure of the leptons through the discovery of the muon neutrino".


In 1964 British physicist Peter Higgs, working at the University of Edinburgh, provided the clue which enabled the conflicting symmetry requirements of the weak and the electroweak forces to be resolved and enabled others to explain how Glashow's heavy bosons and other elementary particles acquired their mass. He (and several others independently, including the Belgians François Englert and Robert Brout) suggested the existence of a force field pervading all of space and affecting the behaviour of elementary particles. Named the Higgs Field in his honour, he predicted that the field would be due to the presence of force-carrying neutral particles, bundles of field energy called bosons, which are very small but also very heavy. Known by physicists as the Higgs Boson, the popular press has irreverently named this elusive particle the "God Particle", a name deplored by physicists. Proof of its existence did not come until 48 years later in 2012, when a Large Hadron Collider constructed for this purpose by CERN in Switzerland was able to produce the particles and to detect their presence.


The Higgs field is a scalar field. It has magnitude at every point in space but no particular direction. The bosons associated with this field are extremely heavy with a mass of 125 GeV/c² which is 133 times the mass of a proton. They were produced immediately after the Big Bang and like all heavy particles, they also have very short mean lifetimes, in this case about 1.6 × 10⁻²² seconds, decaying into quarks, W bosons, gluons and other particles. This raises three puzzling questions:

  • If the boson's lifetime is so short, why have they not all disappeared, together with their Higgs field, in the 13.8 billion years since the Big Bang?
  • The explanation given is that the Higgs field is a background field which has been there since the Big Bang and persists no matter how many ripples or bosons it may contain.


  • If the bosons are so massive and they appear to fill the Universe, how is it that we are not constantly tripping over them?
  • The explanation given is that the bosons are not large solid particles in the conventional sense. They are not clogging up space. They are force carrier particles, bundles of energy which behave like ripples on the Higgs field, just like photons behave like ripples in the electromagnetic field and their mass is Einstein's notional equivalent of their energy content.

    The Higgs field is an energy field which was created at the time of the Big Bang with a zero magnitude but, very shortly after the event, as the Universe cooled down, the field coalesced to a constant non-zero value.

    The bosons therefore did not create the field, they were instead generated by the field.


  • If the Higgs pervades all of space, isn't it what scientists used to call the aether?
  • No. The Higgs is not a transmission medium, it is a field, just like the electromagnetic or gravitational fields which exert their forces in the vacuum of space.


And a more conventional question:

  • How does the Higgs field give fundamental particles their mass?
  • The answer is that the Higgs field does not actually transfer any mass to these particles.

    The Higgs field is a background field with which elementary particles such as quarks, leptons and the W and Z bosons interact and these interactions involve energy which couples the particles to the field. This continuous coupling force impedes the movement of the particle through the field and the strength of this resistance or drag depends on the nature of the particle involved. The stronger the coupling is, the more force is required to accelerate the particle through the field. The magnitude of the coupling force thus determines the particle's resistance to acceleration and this is interpreted as the particle's "inertial" mass, which is equivalent to its "gravitational" mass as in classical physics. The greater the coupling force, the more massive the particle is.


Higgs theories were soon used by Glashow, Weinberg and Salam (see next) and others who introduced the concept of symmetry breaking to explain how the conflicting symmetry requirements of the weak and electroweak reactions could be reconciled thus enabling the unification of the electromagnetic and the weak nuclear forces.


The concept of Higgs boson however remained just a theory until 2012 when the actual discovery of the Higgs boson was achieved by an international team at CERN in Switzerland.


2008 After ten years in construction, CERN's Large Hadron Collider (LHC), currently the world's largest high energy particle accelerator (2019), was finally put into service in Geneva. It is located in a circular tunnel 27 kilometres (17 miles) in circumference, 100 metres (330 ft) below the border of Switzerland and France.

The LHC works on the principle that accelerating relatively large subatomic particles such as protons or neutrons, collectively called hadrons, to a very high speed and smashing them into each other causes the particles to disintegrate into smaller particles which may themselves disintegrate further or recombine in different configurations to form new and sometimes bigger particles. The debris from such collisions, which may include examples of familiar particles as well as previously unknown particles, is examined in a detector. By smashing together beams of protons or other atomic nuclei, it was hoped that Higgs bosons could be found in the debris.


A collaborative project, the LHC was designed and built by over 10,000 scientists and engineers from universities and laboratories in over 100 countries led by Welsh physicist Lyn Evans.


The design of the experiment and the LHC needed to carry it out had several major problems to overcome, all related to the very large size of the elusive boson.

  • To create large particles such as the Higgs boson from particle collisions you have to start with very large particles and accelerate them to extremely high energies, much greater than had been achieved previously, requiring an extremely powerful accelerator.
  • When large particles break up in collisions, the debris consists of a random shower of numerous different particles and the target Higgs bosons must be separated from these "background" particles.
  • Though it was not known precisely at the time, the mass of the Higgs boson was estimated to be over 130 times the mass of the individual protons to be smashed up to reveal it. How could this be possible? (See the explanation)
  • Because of its large size, the percentage of Higgs bosons in the total number of particles created in each collision will be very small, around 1 in 10 billion.
  • Large particles typically decay very quickly. The Higgs boson decays almost instantly, only 1.6 × 10⁻²² seconds after it is produced, so only the products of the decay can be observed, not the particles themselves. The presence of the Higgs boson can only be deduced from evidence of characteristic patterns of its decay. At the time these were not well defined.
  • Since it interacts with all the massive elementary particles of the standard model, the Higgs boson has many different processes through which it can decay.
  • Like the original collision, the decay of the Higgs boson also results in another random shower of particles, not all of which are visible. The most common of these are the following particle pairs associated with different possible decay modes, but these may in turn be involved in further reactions creating yet more variety.
  • b + b̄  (b quark and its antiquark)

    τ⁺ + τ⁻  (tau lepton and its antiparticle)

    γ + γ  (two photons, also called gammas)

    g + g  (two gluons)

    W⁺ + W⁻  (W boson and its antiparticle)

    Z⁰ + Z⁰  (two Z bosons)

    See the Standard Model for a description of these particles.

  • There are in addition many other processes resulting from the original collision which can produce these same particles so it is difficult to tell whether the particles come from the Higgs decay or from the background debris.

  • Some of these Higgs decay modes are more likely than others, the most likely being quark pairs. Rather than this making it easier to find Higgs decay patterns, the same particles are also more likely to be found in abundance in the background debris making it more difficult to identify the Higgs related products. On the other hand, photon pairs resulting from Higgs decays are rarer, but they are also rarer in the background debris which makes them easier to identify.


LHC Data Gathering and Analysis

Because the Higgs boson cannot be observed directly, researchers could only observe its breakdown products and their behaviour. To make the task even more difficult, they were not even sure of precisely what the breakdown patterns might be. Very special detectors were needed to gather data from the debris of hundreds of trillions of collisions in the LHC and this data had to be analysed in the search for signals or patterns of particles characteristic of the expected decay of the Higgs boson before any conclusions about its existence could be reached.


The raw data gathered per event is around one million bytes (1 MB), produced at a rate of about 600 million events per second so that the data flow from the detectors was about 1 PB/s (Petabytes per second).

(Note: 1 Petabyte = 10¹⁵ bytes. This is equivalent to the data on 200,000 DVDs since a DVD stores about 5 Gigabytes (GB) of data.)

Pre-processing, using specialised algorithms, filtered the data down in two stages to between 100 and 200 events of interest per second and this was recorded onto servers at a rate of around 1.05 GB/s for subsequent analysis by 73,000 CPU processor cores at the CERN data centre and distribution for further analysis to a worldwide grid of over 170 collaborating data centres in 40 countries with a total of 740,000 CPU cores and over 600 petabytes of storage. Even after filtering out 99.99% of the captured data at the source, this still leaves around 50 to 70 petabytes of data, equivalent to around 10 to 15 million DVDs, per year to be analysed.
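
These figures can be sanity-checked with a little Python arithmetic, treating them as order-of-magnitude values:

    MB, GB, PB = 1e6, 1e9, 1e15

    raw_flow = 1 * MB * 600e6          # 1 MB per event x 600 million events/s
    print(f"Raw detector flow: {raw_flow/PB:.1f} PB/s")   # 0.6 PB/s, of the order quoted

    yearly_pb = 60                     # mid-range of the 50 to 70 PB retained per year
    dvds = yearly_pb * PB / (5 * GB)   # at about 5 GB per DVD
    print(f"Equivalent DVDs per year: {dvds/1e6:.0f} million")  # about 12 million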

This required massive computing power on a scale not seen before.


No wonder the search took so long!


The LHC Accelerator is basically a synchrotron 27 kilometres in circumference designed to collide contra-rotating particle beams of either protons or lead nuclei after accelerating them up to within 3 metres per second of the speed of light, or about 0.999999991 × 300,000,000 metres per second, giving the protons an energy of up to 7 Tera electronVolts (7 TeV = 7 × 10¹² eV, or 1.12 microjoules) per nucleon, and the lead nuclei an energy of 574 TeV (92.0 µJ).

The two particle beams travelling in opposite directions are constrained into two separate parallel circular paths by 1,232 dipole electromagnets spaced around the ring, each path with electromagnets of opposite polarity so that the opposing beams both bend in the same direction. Another 392 quadrupole magnets are used to keep the beams focused around the centre line of the toroidal chamber and the beams are arranged to intersect at four points around the ring. To achieve the very high magnetic fields necessary, the magnets are cooled by liquid helium to -271.3 degrees C and the system consumed a massive 120 megawatts of electrical power while in operation, as much as one third of the consumption of the neighbouring canton (province) of Geneva.

At the design speed, a particle will make one complete revolution around the 27 kilometre main ring in less than 90 microseconds (µs) making 11,245 revolutions every second. As with all synchrotrons, the particles will be bunched together, in this case, into 2,808 bunches spaced 10 metres apart around the ring with 115 billion protons in each bunch giving rise to 800 million proton collisions per second.

Because the particle beams are bunched and synchronised with the microwave frequency accelerators, the timing and location of the collision events can be tightly controlled.
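
The quoted timing figures are easy to verify with a little arithmetic (the 26,659 m circumference used below is the generally quoted figure for the main ring):

    C = 299_792_458                    # speed of light, m/s
    circumference = 26_659             # metres

    revolution_time = circumference / C     # the protons travel at almost c
    print(f"Revolution time: {revolution_time*1e6:.1f} microseconds")  # about 88.9
    print(f"Revolutions per second: {C/circumference:,.0f}")           # about 11,246

    print(f"Bunch spacing: {circumference/2808:.1f} m")                # roughly 10 m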


The LHC used six Particle Detectors, including the 2 large CMS and ATLAS general purpose detectors and 4 smaller specialist detectors, which are located at the beam intersection points to detect and identify the particles resulting from the collisions. The CMS and ATLAS were each programmed to look for different decay products associated with the Higgs boson. This was because the number of Higgs boson decays was very small compared with the number of background decays from the main collision so that the Higgs decay products were very difficult to distinguish. Using different experimental methods to identify different Higgs decay patterns provided corroborating evidence and improved the statistical significance of the results.

See a photo and detailed description of LHC's CMS Particle Detector.


An even larger particle accelerator, the Superconducting Super Collider (SSC) had been planned in the USA. Design proposals were completed in 1986 for a collider costing $4.4 billion with a ring circumference of 87.1 kilometres (54.1 mi) giving particle energies of 20 TeV per proton, more than double the LHC's 7 TeV. Work was started at Waxahachie, Texas but the project was dogged by lukewarm support. Though President Reagan supported the project, his budget director argued that "It would achieve little more than make a bunch of physicists very happy" and later Presidents were less than fully committed to this "Big Science" project. Neither did the project get the international participation it had hoped for. The logic was, "Why would other countries contribute to a project to give the US the leadership position in high energy physics?". The Japanese were not interested and besides this, Europe already had the LHC project at CERN.

By 1993 cost estimates had ballooned to $11 billion and after $2 billion had been spent and 23 kilometers of tunnels had been excavated beneath the Texas prairie the project was cancelled. Budget priorities had been transferred to the $25 billion International Space Station.


2012 The Higgs Boson also known as the "God Particle", predicted by Peter Higgs in 1964, was finally detected in the Large Hadron Collider (LHC), constructed for this purpose at CERN research labs in Switzerland.


For any discovery in particle physics, the signal should be at least at the '5 sigma' level over the background, which is equivalent to a one in 3.5 million chance of the event being due to a random statistical fluctuation. After sifting through data from more than 300 trillion (3 × 10¹⁴) proton-proton collisions, the research teams at CERN claimed they had sufficient evidence to confirm with more than 99.996% certainty that they had verified the existence of the elusive Higgs Boson with a mass of 125.09 GeV/c², 133 times the 938.3 MeV/c² mass of the proton.
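
The '5 sigma' criterion follows directly from the Gaussian distribution: it is the one-sided probability of a background fluctuation at least five standard deviations above the mean. A short Python check reproduces the one in 3.5 million figure:

    from math import erfc, sqrt

    p = 0.5 * erfc(5 / sqrt(2))               # one-sided Gaussian tail beyond 5 sigma
    print(f"p-value: {p:.2e}")                # about 2.9e-07
    print(f"i.e. one chance in {1/p:,.0f}")   # about 1 in 3.5 million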


The LHC is one of the most expensive scientific instruments ever built. It took a decade to construct and is reported to have cost about $4.75 billion. Taking into account its massive operating costs of about $1 billion per year, it is estimated that the total cost of finding the Higgs boson was about $13.25 billion.


What did we get for the money?

Justifications of the huge expense are hard to find. These are four that I have found. Please email me if you can find more.

  • According to Peter Higgs, it was about improving our "Understanding the World"
  • It is generally recognised by the scientific community that verifying the existence of the Higgs Boson filled an important gap in our knowledge of the universe as represented by the Standard Model, thus validating the model's applicability.
  • On a more mundane level, it provided an expensive piece of equipment that could find use in further particle physics experiments.
  • And like President Reagan's budget director said about the rival US Superconducting Super Collider, "It kept a bunch of physicists very happy".

Higgs and Englert were awarded the Nobel Prize for physics in 2013 for "the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles", one year after the particle's existence had been confirmed at CERN. Sadly Brout had died in 2011 and was therefore not eligible for the award.


In 1967 Glashow, with his former high school classmate, American physicist Steven Weinberg, at Harvard, and independently a few months later, Pakistani physicist Abdus Salam at Imperial College London, published papers describing what is now the accepted electroweak theory. Using the Higgs field theory the group (known as GSW) explained how the W and Z bosons have mass in weak nuclear reactions but not in the electroweak reaction.


The phenomenon is analogous in some ways to the behaviour of the magnetic properties of ferromagnetic materials. When an unmagnetised, red hot iron bar is cooled in an external magnetic field to below its transition temperature (known as its Curie point), the magnetic domains of the atoms in the material align with the external field and the bar becomes magnetic. Conversely when a magnetic bar is heated to above its Curie point, its magnetic domains are scattered and the bar loses its magnetic properties.


The GSW group's assumption was that in the earliest moments after the Big Bang, during which the temperature of the Universe was extremely high, the energy state of the Higgs field was zero. In this situation there was a gauge symmetry applicable to the unified electroweak force in which all four of the bosonic particles, the W⁺ and W⁻, the Z⁰ and the photon, were massless. As the Universe cooled however it reached a critical energy state known as the electroweak unification energy state, estimated to be around 100 GeV, equivalent to a temperature of around 10¹⁵ degrees K, when a system phase transition occurred. Below this critical transition temperature the energy level of the field becomes stable and non-zero even in empty space and the "electroweak" gauge symmetry of the Higgs field is broken and the two separate "weak" and "electromagnetic" gauge symmetries apply in its place. This is known as spontaneous symmetry breaking and as a consequence, since the Higgs field carried the forces of all four bosons, it would also affect the four bosonic forces. Once broken, the W and Z bosons became very heavy due to interactions with the Higgs field while the photon, which does not react with the Higgs field, remained massless. Thus the electroweak force splits into the two separate electromagnetic and weak forces experienced in our current environment. If the energy levels of the particles were increased once more to above the transition temperature, such as in a powerful particle accelerator, the particles would return to their initial state and the original electroweak symmetry would apply.


The GSW theories of the electroweak interactions were validated experimentally in two stages.

In 1973 a team led by Paul Musset working at CERN's Gargamelle bubble chamber were able to detect the neutral currents predicted by the electroweak theory resulting from events in which a neutrino is scattered from a hadron (proton or neutron) without turning into a muon - the signature of a hadronic weak neutral current. Since the uncharged neutrinos are normally undetectable, their presence was indicated in the bubble chamber tracks by a few electrons apparently starting to move without a visible impetus. This effect was explained as being due to the momentum imparted to the electron by the interaction with an unseen neutrino.

In 1983 Italian physicist Carlo Rubbia and Dutch physicist Simon van der Meer, working at CERN, were able to confirm the existence of W and Z bosons in high energy proton-antiproton collisions.


Glashow, Weinberg and Salam were awarded the Nobel Prize in 1979 for their "contributions to the unification of the weak and electromagnetic interaction between elementary particles".


Rubbia and van der Meer were awarded the Nobel Prize in 1984 for "the discovery of the field particles W and Z, communicators of weak interaction".


In 1975 Martin Lewis Perl and his team, experimenting with high energy electron-positron collisions at SLAC, discovered the tau lepton, also called the tauon. This was a surprising indication of a possible third generation of fermions, coming only a year after the discovery of the charm quark had confirmed the existence of the second generation.

The tau is the most massive of the lepton family, having a mass about 3,490 times the mass of the electron, 17 times that of the muon and twice the mass of the proton. Because of its high mass it decays very quickly after it is created, with a lifetime of only 3 × 10⁻¹³ seconds, around ten million times shorter than that of the muon. Its existence was consequently difficult to discern in their complex multilayer particle detector and had to be inferred from measurements of what was missing from the debris resulting from the particle collision.


Perl was awarded the Nobel Prize for the discovery of the tau lepton.


In 2000 evidence confirming the existence of the tau-neutrino, the last of the 12 predicted fundamental fermions, was published by a team of 54 physicists from the United States, Japan, Korea and Greece working on Fermilab's DONUT (Direct Observation of the NU Tau) experiment. Faced with the similar challenges experienced by the Brookhaven team tracking the elusive muon-neutrino, their experiment followed a similar approach except that instead of using a spark chamber to detect the neutrinos they used an alternative emulsion based detector design. They also needed a much higher energy proton beam to create the much heavier tau leptons monitored in the detector.


The Fermilab team used the Tevatron accelerator to produce an 800 GeV proton beam fired at a tungsten target from which they expected the collisions to produce a stream of heavier particles some of which decay into tau leptons and tau neutrinos as well as other neutrino types. The resulting particle stream was directed through an elaborate shield of magnets, iron and concrete 36 metres (118 feet) long to block all the particles except neutrinos which continued on into a 90 cm (3 feet) detector.

The DONUT detector consisted of 260 kg (575 lb) of confusingly named "nuclear" emulsion sandwiched in layers between a series of 1 mm (0.04 inch) thick steel plates. The emulsion was coated onto precisely located plastic substrates in layers 0.1 mm (0.004 inch) thick and was designed to indicate the presence of charged nuclear particles by the ionisation of silver halide, acting essentially like photographic emulsion or film. Since neutrinos carry no charge, the presence of tau neutrinos in the incident beam was inferred from the presence of charged tau leptons detected in the emulsion.

Some of the neutrino particles entering the detector interact with iron nuclei in the steel plates to produce charged particles (leptons) which continue across the detector creating a series of tiny black grains, each less than 1 micron (0.00004 inch) in diameter, as they pass through, and react with, each layer of emulsion. By connecting the small black dots left by particles passing through the series of emulsion layers, the paths of the particles through the detector could be reconstructed. Most of these particles pass through the layers undeflected, but the short-lived tau particle quickly decays into another particle, a muon, resulting in a distinctive short track leading to a change of direction of the track which appears as a kink (a Feynman vertex). Known as its signature, this kink in the track is unique to the tau lepton and indicates that the event was initiated by a tau neutrino.

Subsequent conventional stages in the detector included a magnet stage to measure the charge on the particles, a scintillator and drift chamber to track the particles emerging from the emulsion targets, a calorimeter to measure their energy and a spectrometer to confirm the presence of muons resulting from tau decay, extending the overall length of the detector to 15 metres (50 feet).


The tau neutrino interactions were extremely rare, with only about one in a million million (10¹²) interacting with an iron nucleus to produce a tau lepton. The initial experiments were carried out in 1997 but it took three years to analyse the data and to identify the relevant tracks. Out of six million observed tracks, the team isolated just four containing the telltale kink of the tau neutrino signature, enough to confirm its existence.


As in Fermilab's publications about the discovery of the top quark, the contributions of all of the scientists involved in the discovery of the tau neutrino were recognised in the papers publicising this work and none were singled out for special recognition.


See also Quarks and the Strong Nuclear Force





1926 Alfred Lee Loomis, a successful investment banker with a mathematics and science degree from Yale and a law degree from Harvard, used his immense wealth to pursue his interest in science by setting up his own research facility, the Loomis Laboratory, in Tuxedo Park, the residential enclave of the rich and famous in New York where he lived, and which gave its name to men's formal attire.


After World War I, in which he volunteered for military service, Loomis amassed a fortune investing in utility companies during a period of rapid expansion, collecting directorships in many banks on the way. He anticipated the 1929 Wall Street Crash, converting his holdings into cash beforehand and buying depressed stocks cheaply afterwards. Accused of profiting from inside knowledge gained from his many business directorships and political contacts, a practice which was questionable but not considered illegal in those days, he withdrew from the financial business and devoted his many talents to his first love, the advancement of science, which he funded from his own resources.


Loomis's lab was not simply the plaything of a rich dilettante; it initially undertook serious research into high energy acoustics, chronometry, spectrometry and electro-encephalography. During the 1930s Loomis and his team worked on nuclear physics and radar projects, and Europe's top physicists including Albert Einstein, Werner Heisenberg, Niels Bohr and Enrico Fermi, as well as radio pioneer Guglielmo Marconi, visited the lab, which Einstein called the "Palace of Science".


The lab's military developments and researches into the possibilities of radar were at first scorned by the US military but their attention was grabbed in 1940 when the British Tizard Mission sought the help of the Tuxedo Park lab in the manufacture of Randall and Boot's cavity magnetron. It had a peak power output of over 10 kilowatts at a wavelength of 10 centimetres (3GHz frequency), over a thousand times more powerful than the best American transmitter. Loomis was quickly appointed by Vannevar Bush to the National Defence Research Committee as chairman of the Microwave Committee and within six weeks he founded the famous MIT Radiation Laboratory, to which he transferred his Tuxedo Park activities. Known as the Rad Lab, its mission was to develop microwave radar systems based on the magnetron. See how the magnetron works.

Ernest Lawrence of the University of California helped Loomis to assemble a team of gifted young physicists to staff the Rad Lab and Loomis in turn helped Lawrence secure the funding for his second, "giant", (184 inch) cyclotron.

In 1942, when the highly secret programme, then known as the Manhattan Engineering District, later named the Manhattan Project, was set up to develop the atomic bomb, the Rad Lab provided many of the early recruits.


The Rad Lab was at the forefront of fundamental theory and the development of microwave components and systems engineering during the war years, though its work was highly secret, until it was closed in 1945 after the war ended. Its results, however, were published after 1947 in 28 volumes as the MIT Radiation Laboratory Series, edited by Louis N. Ridenour, which became the microwave engineers' bible.


1926 Frenchman Cesar Parolini devised improved additive printing and plating techniques for printed circuit manufacture, some of which had been described years before by Edison but never implemented.


1926 Waldo L. Semon an American chemical engineer invented plasticised poly vinyl chloride (PVC). The plasticisers are smaller, oily molecules interwoven with the long polymer chains which allow them to slide over each other and give the plastic its characteristic flexibility. Without these plasticiser additives, PVC would be too brittle. Originally discovered by Baumann in 1872, PVC is now used extensively for insulating wires and cables.


1926 German engineers Eckert and Karl Ziegler patented the first commercial injection moulding machine.


1926 German professor of physics at Leipzig University, Julius Edgar Lilienfeld emigrated to the USA and filed a patent for what would today be called a field effect transistor. It consisted of a semiconducting compound sandwiched between two metal plates, one of which was connected to a current source and the other connected to the output. The resistance of the semiconductor between the plates could be varied by means of a variable electric field created across it by a control signal connected to a third plate at the side of this sandwich and insulated from it. It worked in a way analogous to a vacuum tube and in 1930 Lilienfeld was granted a patent for "A method and apparatus for controlling electric currents". Other than Lilienfeld, nobody at the time seems to have recognised the device's potential and it faded into obscurity until it was rediscovered by William Shockley's patent attorneys, much to Shockley's chagrin when he independently conceived a similar device 20 years later.


1926 American engineer and physicist, Robert Hutchings Goddard successfully launched the world's first liquid fuelled rocket which he had designed and built. Between 1926 and 1941 Goddard and his team launched 34 rockets, achieving altitudes as high as 2.6 km (1.6 mi) and speeds of up to 885 km/h (550 m.p.h.).

He was the first scientist to realise the potential of missiles and space flight and contributed to bringing them to realisation.


Goddard carried out extensive theoretical, experimental and practical work on rocket technologies from which he was awarded 214 patents for his inventions. Two of his patents, awarded in 1914, for a "multi stage rocket" and a "liquid fuelled rocket, fuelled with gasoline and liquid nitrous oxide" were important milestones in space flight.

In 1919 he published "A Method of Reaching Extreme Altitudes" outlining the mathematical theories of rocket flight and his research into solid-fuel and liquid-fuel rockets which is regarded as one of the classic texts on the science of rocketry and is believed to have influenced the work of German rocket pioneers Hermann Oberth and Wernher von Braun.

Like the Wright brothers, Goddard was not just concerned with propulsion, he also recognised the importance of three-axis flight control which he successfully achieved by means of control systems using gyroscopes and steerable thrust.

His ideas were ahead of his time and often met with ridicule though he was supported by the Smithsonian Institution and after 1930 by the Guggenheim family. The importance of his work was not fully recognised by the public until after his death in 1945.


1927 In the USA a Lead Acid car battery cost $70 while a typical car cost $700. Today a car battery still costs $70 while car prices have skyrocketed by comparison.


1927 Invention and patent application by French company Chauvin Arnoux for the "Contrôleur Universel", the forerunner of the Multimeter. Despite this patent, the invention came to be copied throughout the world.


1927 In a technical analysis of closed loop control systems, American engineer Harold Steven Black, working at Bell Labs, demonstrated the utility of negative feedback in the design of telephone repeater amplifiers to reduce distortion. Previous studies on feedback control systems by Airy and others (and later by Nyquist) focussed on system stability. Black investigated ways of achieving the low distortion necessary for high capacity multiplex channels and showed that, by inserting a sample of the amplifier output signal, in reversed phase, into the amplifier input, the degree of distortion due to the amplifier could be reduced to almost any desired level at the expense of amplification.
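
The trade-off Black identified can be illustrated numerically. In the sketch below (Python, with purely illustrative figures, not Black's actual values), closing the loop divides the gain by the factor (1 + Aβ), but variations and distortion in the raw amplifier shrink by the same factor.

    # Black's negative feedback trade-off (illustrative values only).
    # Closed-loop gain = A / (1 + A*beta): gain is sacrificed, but the
    # effect of imperfections in A is reduced by the same factor (1 + A*beta).
    A = 10000.0      # open-loop amplifier gain (assumed)
    beta = 0.01      # fraction of output fed back in reversed phase (assumed)

    closed_loop_gain = A / (1 + A * beta)
    print(f"closed-loop gain: {closed_loop_gain:.2f}")      # ~99 instead of 10000

    # A 10% droop in the raw amplifier barely moves the closed-loop gain:
    A2 = 0.9 * A
    print(f"after 10% droop: {A2 / (1 + A2 * beta):.2f}")   # still ~98.9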


1927 Generic patent for flexible printed circuits as well as three dimensional circuits and printed inductors by applying conductive materials to a flexible substrate was published by Frederick Seymour.


1927 Mormon farm boy from Idaho, Philo Taylor Farnsworth conceived the idea of the world's first practical all electronic television system while still in high school. An electronic system had been proposed earlier by Campbell Swinton but due to the primitive state of the technology at the time it was never built. Farnsworth built a working system using the Farnsworth orthicon or image dissector tube, patented his design in 1927 while still only 21 and successfully fought off the patent claims of the mighty RCA. Nevertheless, despite paying royalties to Farnsworth, RCA ultimately found ways around the patents and, promoting their own man, Zworykin, as the originator of the television system, finally put Farnsworth out of business. Like Armstrong, who had similar battles with RCA, Farnsworth's private life suffered and he became an embittered alcoholic in his early 30s. He spent much of his later life and all of his money in a fruitless pursuit of nuclear fusion.


1927 The Quartz Clock invented by Canadian born Warren Marrison and American Joseph Horton, engineers working at Bell Labs. They demonstrated the superior accuracy of clocks using crystal controlled oscillators kept in time by the regular vibrations of a piezoelectric quartz crystal. Initially they were used for precise telecommunications frequency standards but today they are found in every battery powered quartz watch and they provide the microprocessor system clock in every personal computer.

See how the technology works in a modern Quartz Watch.


1927 British engineer Thomas Graeme Nelson Haldane designed and patented the first practical domestic heat pumps, devices which could be used for both heating and cooling. He built small experimental heat pumps for extracting heat from mains water in Scotland.

The principle had first been proposed by Lord Kelvin in 1852 using air as the working fluid in a system he called a heat multiplier. At that time, when the UK had a plentiful supply of coal, there was no commercial interest in his idea.


1927 German physicist Friedrich Hund was the first to notice the possibility of the phenomenon of quantum tunnelling, which he called "barrier penetration", a process by which a particle can appear to penetrate a classically forbidden region of space, passing from point A to point B without passing through the intermediate points. This is a further manifestation of de Broglie's wave - particle duality theory with the electron acting like a wave rather than a particle. The phenomenon can be characterised by Schrödinger's wave equation which tells us that the energy associated with an electron is not a single discrete value but has a probabilistic spread. As a consequence a certain number of electrons will have more than enough energy to jump an energy gap that would normally be too wide. The effect is that electrons appear to tunnel through a barrier which we would normally expect to bar them.
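
The scale of the effect can be estimated with the textbook rectangular-barrier approximation (not part of Hund's original analysis; the figures below are purely illustrative): the probability of an electron penetrating a barrier falls off exponentially with the barrier's width and the square root of its height.

    import math

    # Tunnelling probability through a rectangular barrier (thick-barrier
    # approximation T ~ exp(-2*kappa*L); illustrative values only).
    hbar = 1.054e-34    # reduced Planck constant, J.s
    m_e  = 9.109e-31    # electron mass, kg
    eV   = 1.602e-19    # joules per electron volt

    barrier = 1.0 * eV  # barrier height above the electron's energy (assumed)
    width   = 1.0e-9    # barrier width, 1 nanometre (assumed)

    kappa = math.sqrt(2 * m_e * barrier) / hbar   # wave decay constant in the barrier
    T = math.exp(-2 * kappa * width)              # transmission probability
    print(f"tunnelling probability ~ {T:.1e}")    # ~3.5e-5 for these values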


1928 The process of nuclear alpha decay, first described by Rutherford in 1899, was explained by Russian-born American George Gamow using the quantum tunnelling theory described the previous year (1927) by Hund. Expanding on Hund's theory, Gamow applied it to the alpha particles which he speculated were trapped in potential wells within the mass of charged particles in the atomic nucleus and held there by forces (now known as the strong force) forming a barrier to their escape. Because of the probabilistic energy levels of the particles, there was a tiny (but non-zero) probability of an alpha particle tunnelling through the barrier and appearing on the other side to escape the nucleus.


The following year Gamow devised the liquid drop model of the atomic nucleus to describe how this process can occur. It envisages the neutrons and protons in the atomic nucleus behaving like molecules in a spherical drop of incompressible liquid, held within the drop by the surface tension of the liquid in a way which is analogous to the nuclear force which holds them in the atomic nucleus. If the equilibrium of the nucleus is disturbed by an increase of energy, such as the absorption of a neutron, it will become unstable and the spherical nucleus may become distorted into a dumbbell shape, quickly splitting into two similar fragments forming two new nuclei. These two similarly charged fragments, free from the nuclear force holding them together, will strongly repel each other, flying apart and releasing energy in the process.

Gamow's liquid drop model provided a crude but simple description of the process of nuclear fission and he was able to use it to derive a relationship between the half life of the decay and the energy of the emission. The model was subsequently used extensively by Niels Bohr to explain nuclear fission.
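
The relationship Gamow derived reproduces the empirical Geiger-Nuttall law, in which the logarithm of the half life varies inversely with the square root of the alpha particle's energy, so a modest change in emission energy produces an enormous change in half life. A toy calculation (the coefficients below are illustrative, not fitted values):

    import math

    # Geiger-Nuttall form: log10(half-life) = a + b*Z/sqrt(E), with E in MeV
    # and Z the daughter nucleus's atomic number. a, b, Z are assumed here.
    a, b, Z = -50.0, 1.5, 84

    def log10_half_life(E_MeV):
        return a + b * Z / math.sqrt(E_MeV)

    # Doubling the alpha energy shifts the half life by ~18 orders of magnitude:
    for E in (4.0, 8.0):
        print(f"E = {E} MeV -> log10(t1/2) ~ {log10_half_life(E):+.1f}")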


1928 Scottish physician and microbiologist Alexander Fleming, experimenting with the influenza virus at St Mary's Hospital in London, accidentally discovered penicillin, the world's first naturally derived antibiotic, a scientific breakthrough which many consider the most important of the twentieth century.

Antibiotics are medicines that help to stop or prevent infections caused by living bacteria by killing them or preventing them from copying themselves or reproducing. The word "antibiotic" means "against life".

In 1922, Fleming had discovered lysozyme, an enzyme with weak antibacterial properties that inhibited bacterial growth. He also found lysozyme in fingernails, hair, saliva, skin and tears; however, it proved effective against only a small number of harmless bacteria.

In 1928, he started to research common staphylococcal bacteria, long recognised as among the most critical bacteria causing disease in humans. Returning to his laboratory after a two week vacation, he found that a Petri dish containing a staphylococcus culture, its lid no longer in place, had by chance become contaminated with a blue-green mould called Penicillium notatum. He noticed that there was a clear ring surrounding the mould where the bacteria had been unable to grow. He quickly recognised that, in preventing the growth of the bacteria, this mould had the potential to be used as an effective antibiotic.

Note that influenza is a viral infection not a bacterial infection and viral infections are not affected by antibiotics.


Throughout history there have been the occasional anecdotes about various moulds being used to treat infections, though it is not known whether any of these were related to penicillin. Fleming was the first to experimentally discover that a Penicillium mould secretes an antibacterial substance, and the first to concentrate the active substance involved, which he named penicillin.

He published his findings in 1929 but he was disappointed that his important discovery failed to attract the interest or support of the medical profession. His problems were the difficulty of isolating the mould's active compound, the difficulty of producing penicillin in large amounts, and the fact that there had been no clinical trials to prove its effectiveness in treating infections in humans or even animals.


Penicillium mould naturally produces the antibiotic penicillin but subsequent efforts to extract and purify the unstable compound from the mould proved to be beyond his resources and as a precaution, Fleming froze the mould he gathered and kept it in storage for future study. For a decade, no major progress was made in isolating penicillin as a therapeutic compound. During that time, Fleming sent samples of his precious Penicillium mould to anyone who requested it, with the hope that they might isolate penicillin for clinical use. For the next 16 years, Fleming himself continued to pursue research on improved methods of production of penicillin, its medicinal uses and clinical trials.


Although Fleming received most of the credit for his serendipitous discovery of penicillin, it was Howard Florey and Ernst Chain who actually made a useful and effective drug out of penicillin, after the task had virtually been abandoned by Fleming as being too difficult. They took up the challenge of confirming penicillin's therapeutic action and determining its chemical composition followed by the development of the necessary isolation, purification, concentration and mass production methods to produce the drug.

The far-reaching potential of the therapeutic benefits of penicillin were recognised by Australian pharmacologist and pathologist Howard Florey who returned to Oxford University in 1936 as director of the School of Pathology. There he recruited an interdisciplinary group of scientists to study disease and its medication.

By 1939 Florey had assembled a diverse team of professional staff including a dozen scientists plus technicians to work on an independent Oxford penicillin project. One of the first members of the penicillin team was a highly recommended biochemist Ernst Chain, a Jewish refugee from Nazi Germany who successfully developed a method of purifying penicillin from an extract from the mould. In the early production process many gallons of mould broth were used to produce an amount just large enough to cover a fingernail. Another team member, English biochemist and fungal expert, Norman Heatley, worked on growing Penicillium in high volumes and developed a bulk extraction technique for purifying penicillin.

In May that year Florey's group injected eight mice with a virulent strain of streptococcus and then injected four of them with penicillin; the other four mice were kept as untreated controls. Early the next morning, all the untreated control mice were dead and all the mice treated with penicillin were still alive. Chain called the results "a miracle". Their findings describing the production, purification, and experimental use of penicillin that had sufficient potency to protect animals infected with streptococcus, were published in The Lancet in August 1940.


Florey carried out the first clinical trials of penicillin in 1941 and the first patient was a police constable from Oxford. The patient started to recover, but subsequently died because, at the time, Florey was not yet acquainted with the potential dosage requirements, and had been unable to make enough penicillin to complete the treatment.


Despite Florey's success with the tests on mice, pharmaceutical companies in Great Britain at the time were unable to mass produce penicillin because of World War II commitments and the threat of bombing of their production facilities. Florey then turned to pharmaceutical companies in the United States, then still a noncombatant, for assistance in increasing production and furthering research. Together with Heatley he flew across the Atlantic in July 1941. Concerned about the security of taking a culture of their invaluable mould in a vial that could be stolen, they smeared their coats with the Penicillium strain as a security back-up on their journey. While there, they convinced four drug companies, Merck, Squibb, Pfizer, and Lederle Laboratories (now part of Pfizer) as well as the US Department of Agriculture (USDA) represented by a secretive, if not devious, chemist Andrew Jackson Moyer, director of its Northern Research Laboratory, to aid in the production of penicillin.

Ultimately the Penicillium mould was grown under precisely controlled temperature and pH conditions in deep fermentation tanks by adding carbohydrates including glucose, sucrose, crude sugars and other ingredients. The penicillin product was then separated from the fermented broth by activated carbon filtration to remove solids, followed by liquid-liquid solvent extraction using a butyl acetate solvent and evaporation to recover the penicillin. This increased the production efficiency of penicillin by around 500 times and by the time of the D-Day invasion of Normandy in 1944, a total of 21 U.S. companies had joined together to produce 2.3 million doses of penicillin in preparation for the conflict.


1942 proved to be a critical period in the medical usage of penicillin.

The first person to be successfully treated with penicillin was 33 year old American Anne Miller who lay dying at New Haven Hospital in March 1942, suffering from severe septicemia, (blood poisoning), following a miscarriage. During four weeks of treatment her temperature had soared to over 106°F (41°C), and no medications had been able to reduce the fever. At the time, penicillin was only available in minute quantities in research establishments and the U.S. government had tight control over key medicines during wartime. Miller's doctor John Bumstead was able to use personal government connections to procure roughly a tablespoonful of penicillin for his patient. This was half of the entire store of the antibiotic in the whole of the United States. The next day, her temperature was back to normal. She was cured and lived to be 90.

By mid 1942, the Oxford team produced the pure penicillin compound as a yellow powder.

In August 1942 at Oxford, Fleming successfully cured a second seriously ill volunteer, Harry Lambert (a friend of his brother Robert), who had been diagnosed with terminal streptococcal meningitis.


Penicillin changed the course of medicine and enabled the successful treatment of previously serious and life threatening illnesses such as bacterial endocarditis, meningitis, pneumococcal pneumonia, syphilis and gonorrhea.

For their discovery and development of the new antibiotic penicillin, Fleming, Florey and Chain received the Nobel Prize in 1945.


Epilogue

Chain, who had trained in the German tradition of collaboration between academic research and industry, vigorously urged that a patent be sought on penicillin, as was usual in German research institutes. Florey on the other hand was reluctant to enter into such a commercial agreement on a discovery he presumed would benefit all of mankind. He saw it as a humanitarian effort which should not be exploited for personal gain.

Penicillin challenged the basic notion of a patent since it was a natural product produced by another living microorganism. Apart from the ethical issues, Fleming himself had not applied for a patent when he discovered penicillin in 1928 because at that time he did not yet have a patentable product.

Florey and others, including the head of the British Medical Research Council and the President of the Royal Society, opposed the application to patent penicillin on the grounds that patenting lifesaving drugs was unethical. The prevailing view in Great Britain at the time was that a process could be patented, but the chemical could not. British law did not cover the protection of natural products but innovations used in the production process to produce it could be patented. Bolstered by the external pressure, Florey prevailed and no application for a patent was made, much to the annoyance of Chain.

Things were different in the USA. Fleming and Chain were furious when they discovered that the American pharmaceutical company Merck, and individually, Andrew Jackson Moyer of the USDA, had each filed patents on the process of penicillin production which were granted with no opposition. This meant that British pharmaceutical companies manufacturing the antibiotic invented and developed in Britain, were obliged to pay millions of dollars in licensing fees to the United States for the use of the methods employed in their own production.


1928 Bell Labs engineer Ralph Hartley devised measures for quantifying the information content in electrical signals. He showed that a single pulse can represent M different, distinct messages given by

M = 1 + A/ΔV

Where A is the transmitted signal amplitude in Volts and ΔV is the resolution precision of the receiver, also in Volts.

He also showed that the data signaling rate R can be represented by

R = fp log2(M)

Where log2(M) is the information sent per pulse in bits/pulse

and fp is the pulse or symbol rate in symbols per second or Baud.
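
A quick worked example of Hartley's measures (with assumed figures): a receiver able to resolve steps of 0.125 V on a 1 V signal distinguishes M = 9 levels, a little over 3 bits per pulse.

    import math

    # Hartley's information measures (illustrative values).
    A   = 1.0       # transmitted signal amplitude, volts (assumed)
    dV  = 0.125     # receiver resolution, volts (assumed)
    f_p = 2400.0    # pulse (symbol) rate, symbols per second (assumed)

    M = 1 + A / dV                  # distinct messages per pulse
    bits_per_pulse = math.log2(M)   # information per pulse
    R = f_p * bits_per_pulse        # data signalling rate, bits per second

    print(f"M = {M:.0f} levels, {bits_per_pulse:.2f} bits/pulse, R = {R:.0f} bit/s")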


1928 Swedish born American engineer Harry Nyquist, working at Bell Laboratories, showed that a signal with a maximum frequency of B hertz, or bandwidth B, can be completely determined by specifying its ordinates at a series of points spaced 1/(2B) seconds apart. The minimum sampling frequency required, fs = 2B, is known as the sampling rate or the Nyquist Rate and, conversely, the maximum bandwidth B which can be represented by a sampling rate of fs is equal to fs/2 and is called the Nyquist Frequency. This was the basis of the Sampling Theorem, later formulated by Shannon, which states that a signal can be exactly reproduced if it is sampled at a frequency F, where F is greater than twice the maximum frequency in the signal. It is very important for specifying the sampling rate in monitoring and control systems, but it is also the foundation on which digital communications are based. Nyquist went on to develop stability criteria for feedback control systems.
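
In practical terms (a minimal sketch with assumed figures): telephone-quality speech band-limited to 4 kHz needs at least 8,000 samples per second, while a converter running at a given rate can only faithfully represent content below half that rate.

    # Nyquist rate and Nyquist frequency (assumed example figures).
    B = 4000.0              # signal bandwidth, hertz, e.g. telephone speech
    nyquist_rate = 2 * B    # minimum sampling frequency to capture the signal
    print(f"Nyquist rate for B = {B:.0f} Hz: {nyquist_rate:.0f} samples/s")

    fs = 44100.0            # a chosen sampling rate, e.g. CD audio (assumed)
    nyquist_frequency = fs / 2   # highest bandwidth representable at fs
    print(f"Nyquist frequency at fs = {fs:.0f} Hz: {nyquist_frequency:.0f} Hz")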


1928 Another Swedish born American engineer, John Bertrand Johnson, working at Bell Labs, identified the spectrum of random white noise found in electrical circuits as due to the thermal agitation of electrons in the conductors. His colleague Harry Nyquist (see above) showed that the maximum noise power P in watts which can be transferred into a matched circuit is independent of the resistance and is given by:

P = k T Δf

Where k is Boltzmann's constant in joules per kelvin, T is the absolute temperature in kelvin and Δf is the bandwidth in hertz over which the noise is measured.

Such thermal noise is now called Johnson noise.
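
For example, a quick calculation from the formula above, together with the standard equivalent Johnson noise voltage for a resistor (the temperature, bandwidth and resistance below are assumed):

    import math

    # Johnson (thermal) noise using Nyquist's result P = k*T*df.
    k  = 1.381e-23   # Boltzmann's constant, J/K
    T  = 290.0       # room temperature, kelvin (assumed)
    df = 10e3        # measurement bandwidth, hertz (assumed)

    P = k * T * df
    print(f"available noise power: {P:.2e} W")          # ~4.0e-17 W

    # Equivalent open-circuit rms noise voltage across a resistor R:
    R = 10e3         # resistance, ohms (assumed)
    v_rms = math.sqrt(4 * k * T * R * df)
    print(f"noise voltage across {R:.0f} ohms: {v_rms * 1e6:.2f} uV rms")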


1928 Rocket engineer Herman Potocnik, a.k.a. Hermann Noordung, born in Slovenia, published "Das Problem der Befahrung des Weltraums - der Raketen-Motor" ("The Problem of Space Travel - The Rocket Motor") in which he was the first to envisage the possibility of geostationary artificial satellites and to calculate their orbits. He outlined the idea of orbiting manned space stations with the crew in radio contact with the ground. Both rocketry and radio communications were still in their infancy in those days and while Potocnik's ideas were interesting, there was no practical way of implementing them with the technology of the day. Though his book was translated into Russian and parts of it into English, it had little impact at the time and Potocnik died in poverty at the age of 36.

It was not until 1945 that the idea of worldwide radio communications using dedicated geostationary communications satellites was proposed by Arthur C Clarke.


1928 Indian physicist Chandrasekhara Venkata Raman, working at the University of Calcutta (now Kolkata), discovered that when monochromatic light impinges on the molecules of the transmitting medium, the light beam causes the molecules in the medium to vibrate, exciting them from a ground state to a virtual energy state. When the molecule relaxes it emits a photon as it returns to a different ground state. The frequency shift in the emitted photon away from the frequency of the original excitation corresponds to the difference in energy between the original ground state and this new state (the photon energy being E = hν, from Planck's law). Thus the photons in the light beam may take some energy from, or impart some energy to, the molecules. The increased energy photons in the beam are manifest as a higher frequency spectrum component in the light and similarly the lower energy photons appear as a lower frequency spectrum line. The scattered light thus has a prominent spectral line corresponding to the original beam and additional spectral lines which are characteristic and unique to the molecules of the substance of the transmission medium. This property is used in chemical and physical spectroscopy to identify materials and also in forensic work by law enforcement agencies to detect drugs and other materials.

This energy scattering is known as Raman Scattering or the Raman Effect and the discovery was considered to be confirmation of the quantum effects of light.
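
The size of the shift can be worked out directly from the photon energies. Spectroscopists usually quote Raman shifts in wavenumbers (cm⁻¹); a sketch with an assumed pump wavelength and shift:

    # Convert a Raman shift in wavenumbers to the scattered (Stokes) wavelength.
    pump_nm  = 532.0     # excitation laser wavelength, nm (assumed)
    shift_cm = 1000.0    # Raman shift of the molecule, cm^-1 (assumed)

    pump_wavenumber   = 1e7 / pump_nm               # cm^-1 (1 cm = 1e7 nm)
    stokes_wavenumber = pump_wavenumber - shift_cm  # photon gives energy to molecule
    stokes_nm = 1e7 / stokes_wavenumber

    print(f"Stokes line at {stokes_nm:.1f} nm")     # ~561.9 nm for these values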

Raman was awarded a Nobel prize in 1930 for his discovery.


1929 The kinescope, a cathode-ray tube with all the features of modern television picture tubes, was invented by Russian born American, Vladimir Zworykin, working for RCA. In 1923 while working at Westinghouse, Zworykin applied for a patent on the iconoscope, a tube based on Campbell-Swinton's proposal of 1911, designed to create the images in his early television cameras, but it was not used commercially and the patent was not granted until 1938. Zworykin was told by Westinghouse "to find something more useful to work on". The imaging technology on which television cameras were based is in fact descended from Farnsworth's image orthicon, but RCA's PR machine claimed that Zworykin laid the foundations of today's television systems in 1923, ignoring the contributions of Farnsworth, the farm boy from Idaho who is almost forgotten today.


1929 Initially following in the footsteps of Vesto Slipher and Harlow Shapley, American astronomer Edwin Hubble, working at the Carnegie Observatories in Pasadena, ably assisted by the unschooled Milton Lasell Humason, formulated the empirical Redshift Distance Law of Galaxies, nowadays known simply as Hubble's Law. Hubble and Humason measured the redshifts of more galaxies and the relative brightness of Cepheids in a number of distant galaxies and used Leavitt's period-luminosity relation and the inverse square law to determine their relative distances. He went on to plot the redshifts against the associated distances and found that the redshift of distant galaxies increased in direct proportion to their distance. The fact that the more remote galaxies were receding faster showed that not only was the universe expanding but that at some time in the past, the entire universe would have been contained in a single point. This event was later estimated to have been approximately 13.7 billion years ago.

Hubble's Law is expressed by the equation v = H0D, where D is the distance to a galaxy with velocity v and H0 is the Hubble Constant of proportionality.
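
A back-of-envelope use of the law, taking an assumed modern value of H0 of about 70 km/s per megaparsec: a galaxy 100 Mpc away recedes at roughly 7,000 km/s, and the reciprocal of H0 gives a characteristic age for the universe of about 14 billion years, consistent with the figure quoted above.

    # Hubble's Law v = H0 * D, with an assumed modern value for H0.
    H0 = 70.0        # km/s per megaparsec (assumed)
    D  = 100.0       # distance to a galaxy, megaparsecs (assumed)
    print(f"recession velocity: {H0 * D:.0f} km/s")

    # The Hubble time 1/H0 gives a rough age for the universe:
    km_per_Mpc = 3.086e19
    hubble_time_s = km_per_Mpc / H0            # 1/H0 in seconds
    years = hubble_time_s / 3.156e7            # seconds in a year
    print(f"Hubble time ~ {years / 1e9:.1f} billion years")   # ~14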

British astronomer Fred Hoyle, who supported an alternative steady state model of the universe, sarcastically called this event the Big Bang in 1949 and the name has stuck ever since.


Prior to the publication of Hubble's Law, the conclusion that the universe could be expanding had already been reached by others following a purely mathematical route, but they had no evidence to prove it. Albert Einstein's general theory of relativity (and Newton's Laws) had been unable to explain why the gravity between all the matter in the universe did not cause it to contract. He overcame this paradox by inventing the cosmological constant, a mathematical fiddle factor for which there was no physical evidence, to justify a static model. It was a notional force pushing the universe apart which acted in the opposite direction to gravity on a cosmic scale, but not at short distances.


Rejecting Einstein's mathematical contrivance, Russian mathematician Alexander Friedmann investigated three possibilities, a contracting universe, an expanding universe and a steady state model, and published his findings in 1922 in Zeitschrift für Physik. He ruled out the static model which he pointed out would be unstable, since the movement of the slightest mass anywhere in the universe would destroy the equilibrium and lead to either an explosive expansion or a cataclysmic contraction. He explained that if the universe was started by some original great explosive force which blew it apart, its behaviour would depend on the magnitude of the force and the amount of matter in the universe. If the density of stars in the universe was low and the force was high, the stars would keep travelling outwards forever. If on the other hand the density was high and the force was small, inertia would keep the stars travelling outwards until gravity eventually took over, pulling them back, and the universe would start to contract again.

Unfortunately Friedmann did not live to see his predictions confirmed by Hubble. He died from typhoid fever in 1925 at the age of 37.


Independently, Monsignor Georges Henri Joseph Édouard Lemaître, a Belgian priest, professor of physics and astronomer at the Catholic University of Leuven, who liked to keep one foot in the church and the other in the observatory, was studying the same gravitational paradox and came to similar conclusions to Friedmann. The notion that the universe could have originated at a precise point in time resonated with his Christian creationist beliefs. In 1927, two years before Hubble's discovery, he published "A homogeneous universe of constant mass and growing radius accounting for the radial velocity of extragalactic nebulae" in which he explained his theory of an expanding universe governed by a relationship similar to Hubble's Law. It was published in French in the Annales de la Société Scientifique de Bruxelles, which was not widely read outside of Belgium, so it had little impact.

His ideas were however picked up by Arthur Eddington who invited him to talk about the relationship between the universe and spirituality at a meeting of the British Association in 1931. Lemaître explained that the expanding universe implied that going backwards in time, all the mass of the universe would have contracted into a single point which he called the Primeval Atom at a finite time in the past before which time and space did not exist. He compared the origin of the universe to "the Cosmic Egg exploding at the moment of the creation".


Until Hubble came along, there was no proof of Friedmann's or Lemaître's hypothesis. Einstein mocked Lemaître saying "Your calculations are correct, but your physics is abominable." Hubble's observations however proved that Einstein was wrong and Einstein admitted in 1930 that "The cosmological constant was the biggest mistake of my life."


1930 The works of British physical chemist John Alfred Valentine Butler and German surface chemist Max Volmer on the theoretical basis of kinetic electrochemistry were summarised in the fundamental Butler-Volmer equation. It shows that the current flowing at an electrode depends directly on the applied potential at the electrodes and is the sum of the anodic and cathodic contributions. It is also directly proportional to the area of the electrodes and increases exponentially with temperature. This electrochemical reaction is more complex than a simple chemical reaction, which depends strongly on the temperature (see Arrhenius' Law).
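
The equation itself, not written out above, is usually given as i = i0[exp(αa F η / RT) − exp(−αc F η / RT)], where i0 is the exchange current density, η the overpotential and αa, αc the anodic and cathodic transfer coefficients. A minimal numerical sketch with assumed coefficients:

    import math

    # Butler-Volmer equation for a single-electron reaction (assumed values).
    F  = 96485.0       # Faraday constant, C/mol
    R  = 8.314         # gas constant, J/(mol.K)
    T  = 298.0         # temperature, kelvin (assumed)
    i0 = 1e-3          # exchange current density, A/cm^2 (assumed)
    aa, ac = 0.5, 0.5  # anodic / cathodic transfer coefficients (assumed)

    def current_density(eta):
        """Net current density at overpotential eta (volts): the anodic
        contribution minus the cathodic contribution."""
        return i0 * (math.exp(aa * F * eta / (R * T))
                     - math.exp(-ac * F * eta / (R * T)))

    for eta in (0.01, 0.05, 0.10):
        print(f"eta = {eta:4.2f} V -> i = {current_density(eta):.3e} A/cm^2")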


1930 Russian Wladimir Gusseff invented Electro-Chemical Machining (ECM) using electrolytic erosion, a galvanic process, essentially the reverse of electroplating, which allows the machining of complex shapes in very hard metals. The work piece forms the anode and the shaped tool forms the cathode and they are supplied with a low DC voltage of about 40 volts. Electrolyte is pumped through the gap between the tool and the work piece and metal is removed from the work piece in the vicinity of the tool by galvanic action as in a battery. The flowing electrolyte removes the dissolved metal so there is no tendency for it to be deposited on the cathodic tool.


Note This is different from the more common machining process known as Spark Erosion or Electro-Discharge Machining (EDM). In this process the work piece and the tool are immersed in a bath of dielectric fluid, and the gap between the tool and the work piece is fed with a high frequency pulsating voltage which creates a spark across the gap which in turn vaporises the metal of the work piece in the proximity of the tool. It was invented by Russian brothers B.R. and N.I. Lazarenko in 1943.


Both of the above processes are used to make the intricate shapes used in injection moulding tools.


1930 Lilienfeld gave a paper on electrolytic capacitors before the American Electrochemical Society in which he outlined the fundamental theories and practice for the design of these components, still in use today.

Electrolytic capacitors have a very high capacitance per unit volume, allowing large capacitance values to be achieved and making them suitable for high-current and low-frequency electrical circuits. The construction is similar to a spiral wound battery with two conducting aluminium foils, one of which is coated with an insulating oxide layer which acts as the dielectric, and a paper spacer soaked in electrolyte, all contained in a metal can. The aluminium oxide dielectric can withstand very high electric field strengths, of the order of 10⁹ volts per metre, before breakdown. This allows very thin dielectric layers to be used, which in turn permits a much larger area of the capacitive plates to be accommodated within the space inside the case. These characteristics enable very high capacitance values to be achieved.

The foil insulated by the oxide layer is the anode while the liquid electrolyte and the second foil act as cathode. They are thus polarised and so must be connected in correct polarity to avoid breakdown.

Electrolytic capacitors can store a large amount of energy and are often used in battery load sharing applications to provide a short term power boost. See also Supercapacitors and Alternative Energy Storage Methods.
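
To see why the thin oxide matters, here is a rough parallel-plate estimate (the dimensions below are assumed, and real devices also etch the foil to multiply its effective surface area): the capacitance C = ε0 εr A/d grows directly as the dielectric thins.

    # Parallel-plate estimate of an electrolytic capacitor (assumed dimensions).
    eps0  = 8.854e-12    # permittivity of free space, F/m
    eps_r = 9.0          # relative permittivity of aluminium oxide (approx.)
    A     = 0.05         # foil area, m^2, e.g. a long wound strip (assumed)
    d     = 50e-9        # oxide dielectric thickness, 50 nm (assumed)

    C = eps0 * eps_r * A / d
    print(f"capacitance ~ {C * 1e6:.0f} uF")            # ~80 uF

    V = 25.0                                            # working voltage (assumed)
    print(f"stored energy ~ {0.5 * C * V**2:.3f} J")    # ~0.025 J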


1930 22 year old student Frank Whittle, attending the RAF officers training school at Cranwell, designed the world's first high power jet engine, a single stage machine with a centrifugal compressor for aviation applications, which his commanding officer arranged for him to show to the UK Air Ministry. The Air Ministry, which had no experience in such matters, passed the design for evaluation to Alan Arnold Griffith, an engineer at the Royal Aircraft Establishment. But Griffith had his own ambitions and alternative ideas for gas turbine propulsion, preferring an axial flow turbine driving a propeller. He declared Whittle's design to be impracticable and so it lost the support of the Air Ministry. Undaunted, Whittle persevered and patented his jet a few months later.

After Cranwell, he went on to study mechanical engineering at Cambridge University and while still there he was reminded in 1935 by the Air Ministry that the patent for his jet was about to lapse and that they had no intention of paying its renewal fee of £5. Short of cash himself and seeing no hope of his engine becoming a reality, he let the patent lapse.


Meanwhile at the University of Göttingen in Germany, Hans-Joachim Pabst von Ohain, aware of Whittle's patent, started work in 1935 on his own design for a jet engine which he patented in 1936. It was quickly picked up by the Heinkel aircraft company who went on to manufacture von Ohain's designs.


In contrast, Whittle struggled to get financial support and, dogged by further unhelpful reports from A.A. Griffith, he received lukewarm support from the government but no money, so he had to set up his own company, "Power Jets", with private backers providing minimal funding to develop the engine. Desperately short of cash, they nevertheless managed to produce an impressive working prototype in 1937, when the government finally woke up to its importance. Despite the growing threat of war with Germany, still no government cash was forthcoming until mid 1938 when Power Jets eventually received a development contract worth £5,000, accompanied by grant conditions which made Power Jets subject to the Official Secrets Act, making it difficult for them to raise further private equity.


Starved of funds, Power Jets were overtaken by the well funded Heinkel, who flew their first jet aircraft, the Heinkel He-178, on 27 August 1939. The first British plane powered by Whittle's jet was a Gloster which took off on 12 April 1941.


Suffering from ill health and mental strain, Whittle found that his reward for his pioneering work and personal sacrifices as part of the war effort was that his technology had been given to the USA as part of the Tizard Mission and that his company was nationalised in 1944, for which he was offered no compensation since he had previously offered his shares to the Air Ministry. The government later relented and paid him off with the princely sum of £10,000.

Belatedly he was showered with honours, but not when he needed them most.


1931 Wallace Hume Carothers working at DuPont labs created Neoprene the first successful synthetic rubber. Neoprene's combination of properties, resistance to chemicals, toughness and flexibility over a wide temperature range made it suitable for the design of pressure vents which facilitated the construction of recombinant batteries and for gaskets used in battery enclosures. Searching for synthetic fibres Carothers also invented Nylon in 1935, now also used to produce a wide range of injection moulded components from containers to gears. In the USA, nylon stockings went on sale for the first time in 1940 and four million pairs were sold in the first few hours.


Carothers was a manic depressive alcoholic who, despite his great achievements, considered himself a failure. He founded and was head of Du Pont's research group working on polymers and polymerisation, one of the most successful groups in the history of polymer science. He committed suicide in 1937, at the age of 41, by taking cyanide, a year after his marriage and shortly after the untimely death of his sister.


1931 The portable Metal Detector was patented by American engineer Gerhard Fisher. Metal detectors use a variety of methods to detect small changes in inductance or perturbations in the local magnetic field when the detector is near to a metal object. See also Alexander Graham Bell's detector.


1931 Irish chemical engineer James J. Drumm introduced the alkaline Nickel Zinc Drumm traction battery after five years of development. A variant of the Michaelowski chemistry, the cells had a voltage of 1.85 volts and charge / discharge rates 40% higher than the Nickel Iron cells with which they were intended to compete, but they suffered from a low cycle life and a high self discharge rate. Drumm built four trains to use his batteries but with the outbreak of World War II it became impossible to obtain orders or raw materials and the company folded in 1940.


1931 French engineer H. de Bellescize applied for a UK patent for an improved homodyne radio tuning circuit. It was the first automatic frequency control (AFC) system and the first circuit to incorporate the basic features of a phase locked loop (PLL). The following year de Bellescize published a description of his design in "Onde Electrique", volume 11, under the title "La Réception Synchrone". The original homodyne receiver was designed in 1924 by a British engineer named F.M. Colebrook in an attempt to improve on Armstrong's superheterodyne receiver. Colebrook's design mixed the received signal with a locally generated sine wave at the same frequency as the carrier wave to extract the signal from the carrier in a simple detector - essentially a zero intermediate frequency (IF). De Bellescize improved on this by detecting any difference between the received carrier frequency and the local oscillator frequency and using the difference signal to adjust the oscillator frequency until it exactly matched the carrier frequency, thus ensuring perfect synchronisation of the two signals and the desired zero IF. Further improvements to the design were made by British engineer D.G. Tucker and others and the tuner was renamed the synchrodyne.

The phase locked loop (PLL) is now a fundamental building block in synchronisation and control circuits and complete PLL circuits are available in low-cost IC packages.
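
The principle can be sketched in a few lines of simulation: a phase detector compares the incoming carrier with the local oscillator, a proportional term damps the loop and an integral term pulls the oscillator frequency onto the carrier. All the parameters below are assumed for illustration; they are not taken from de Bellescize's design.

    import math

    # Minimal discrete-time phase locked loop (illustrative parameters).
    fs    = 100_000.0      # simulation rate, samples per second
    f_in  = 10_000.0       # incoming carrier frequency, Hz
    f_vco = 10_100.0       # local oscillator starts 100 Hz off (assumed)
    Kp, Ki = 200.0, 20_000.0   # proportional and integral loop gains (assumed)

    phi_in = phi_vco = 0.0
    for _ in range(20_000):                    # 0.2 s of simulated time
        phi_in += 2 * math.pi * f_in / fs
        error = math.sin(phi_in - phi_vco)     # phase detector output
        f_vco += Ki * error / fs               # integral term: frequency pull-in
        phi_vco += 2 * math.pi * (f_vco + Kp * error) / fs   # proportional damping

    print(f"oscillator pulled to {f_vco:.1f} Hz (carrier {f_in:.0f} Hz)")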


1931 English engineer Alan Dower Blumlein invented stereo sound. A prolific inventor Blumlein made many advances in the field of acoustics and made significant contributions to Britain's first all electronic television service. During the war years he applied his considerable skills to Radar design. He died in a plane crash in 1942 at the age of 38 while testing the H2S Airborne Radar equipment for which he had designed many of the circuits. He was awarded 132 patents in his short life.


1931 American physical chemist Harold Clayton Urey, with research associate George M. Murphy at Columbia University, searching for potential isotopes of the common Hydrogen atom, reasoned that a heavier isotope would have a slightly higher boiling point than a light one and that if liquid Hydrogen was slowly evaporated, most of the heavy Hydrogen would remain in the liquid residue. They calculated that by carefully warming 5 litres of liquid Hydrogen, it could be distilled down to 1 millilitre, in which the heavy isotope would be enriched by about 100 to 200 times.

To test this theory, in collaboration with Ferdinand G. Brickwedde, a physicist at the National Bureau of Standards (NBS), Urey used low temperature fractional distillation to distill 5 litres of liquid Hydrogen down to 1 cc. He then examined this residue in a mass spectrometer which confirmed the presence of the heavy isotope and showed its weight to be double that of common Hydrogen. He later named this new isotope Deuterium.

At the time, the physics and chemistry of isotopes were not well understood. The common, stable, Hydrogen-1 atom was known to consist of one proton and one electron and initial speculation by various chemists was that the proton in this new found, so-called Heavy Hydrogen - isotope Hydrogen-2, was double the mass of the Hydrogen-1 proton. This was one year before the 1932 discovery of the neutron by James Chadwick which pointed the way to Urey's correct conclusion that the nucleus of the heavy Hydrogen atom was composed of one proton and an equally sized neutron and, more generally, that isotopes are atoms with the same number of protons but with a different number of neutrons.

The paper announcing the discovery of heavy Hydrogen, was jointly published by Urey, Murphy, and Brickwedde in 1932.


In 1932, working with Edward W. Washburn, chief chemist of the NBS, and his associates, Urey continued to investigate the possibility that Deuterium could be present in some water molecules, giving rise to heavy isotopes of water. Using a range of separation techniques including fractional distillation and electrolysis (see next) to measure the density of water from a wide variety of terrestrial and oceanic sources, they discovered a range of densities varying by a few parts in a million due to small quantities of heavier molecules, which they determined were due to the presence of a heavy water isotope. They also concluded that electrolysis was the most successful method of separating the water isotopes.

  • Electrolysis is the process of decomposing ionic compounds by passing a direct electric current through them, separating them into their constituent components.
  • An example is the electrolysis of water (H2O) in which the electric current decomposes the water into Hydrogen and Oxygen gases. If the water contains Deuterium atoms, the lighter water molecules containing the Hydrogen atoms will decompose preferentially to the heavier molecules containing the Deuterium atoms, leaving the heavier water molecules, now called Deuterium oxide (D2O) in the liquid residue.

At the same time, at the University of California, Urey's mentor Gilbert Lewis, under whom he had studied thermodynamics, was also investigating the separation of water isotopes using electrolysis and was the first to isolate a sample of pure heavy water by that means. His results were published in 1933 as "The Isotopic Fractionation of Water".


Heavy water makes up only a small part of naturally occurring water with only 156 Deuterium atoms per million Hydrogen atoms (0.015%). Pure Deuterium oxide has a density about 11% greater than regular water, a freezing point 4°C higher, a boiling point 1°C higher and a lower refractive index, but is otherwise physically and chemically similar. Since Deuterium is a stable isotope, heavy water is not radioactive.

At the time of its discovery there was not much demand for heavy water and very little supply. Early applications in 1934 were mostly for research by scientific institutions, one of the first of which was the investigation of its use as a biological tracer substance in experiments on human tissue.


In 1933 Leif Tronstad, a Norwegian professor of chemistry at the University of Trondheim, came up with the idea of producing heavy water at a large hydro electric plant under construction at Vemork near the Rjukan waterfall in Telemark in Norway. He was aware that heavy water could be produced by the electrolysis of water, but huge amounts of DC electric power would be required for industrial scale production. The Vemork plant was a large, 60 MW electricity generating plant designed to supply inexpensive hydroelectric power to an adjacent plant which supplied Hydrogen, produced by the electrolysis of water, to a nearby artificial fertiliser manufacturing plant at Rjukan. The plant was owned by Norsk Hydro in alliance with Germany's IG Farben and the UK's Imperial Chemical Industries and used the Haber process of Nitrogen fixation to produce the ammonia (NH3) used in the production of the fertiliser and possibly explosives. The Haber process combined the unlimited supply of Nitrogen from the atmosphere with immense amounts of Hydrogen, which was produced by the electrolysis of water, to manufacture the ammonia. Here was an opportunity to create heavy water as a commercially viable by-product of Hydrogen production from residues in the Vemork plant's electrolysis stages.

Tronstad teamed up with Jomar Brun, the head of Norsk Hydro's Hydrogen electrolysis plant, to extract heavy water using cascaded electrolysis and by December 1934, the plant was opened and Vemork became the world's first heavy water industrial production facility, ultimately capable of producing 12 tons per year, but initially producing just over 100 grams by January 1935.


Demand changed dramatically in 1939 however when von Halban and Kowarski, showed that heavy water could be used as a neutron moderator in a nuclear reactor and possibly nuclear weapons using natural Uranium fuel.

In April 1940, Germany invaded neutral Norway and took control of Norsk Hydro's Vemork plant forcing the workers to increase the output of heavy water to 4 Kg per day by the end of 1941.

In response, between 1940 and 1944, the Vemork plant was the target of a series of daring sabotage actions by the Norwegian resistance movement and bombing by the Allied forces to keep the heavy water, and potentially nuclear weapons, out of the hands of the Germans during World War II. These operations succeeded in disabling the plant in early 1943.


In 1934 Urey alone was awarded the Nobel Prize in Chemistry for the discovery of heavy Hydrogen. However he shared his prize money with his collaborators Murphy and Brickwedde to whom he gave 25% each.

He later became a world expert on isotope separation and played a significant role in the Manhattan Project for the development of the atom bomb.


Meanwhile, in 1932, after Urey's discovery of Deuterium, the Hydrogen isotope with one extra neutron, Australian-born physicist Marcus Oliphant and Austrian physical chemist Paul Harteck, studying under Rutherford at Cambridge University's Cavendish Laboratory, discovered two new isotopes, namely Tritium (Hydrogen-3), a heavy isotope of Hydrogen with two extra neutrons, and the Helion (Helium-3), a lighter isotope of Helium with only one neutron, one fewer than the two of the common Helium-4 atom. They used a particle accelerator to bombard various targets with fast heavy Hydrogen nuclei (Deuterons) and while they were able to identify the isotopes produced, they were not able to isolate them at the time.

In 1934 Oliphant, together with Rutherford, speculated that the two isotopes they had discovered, Tritium with surplus neutrons and Helium-3 with a neutron deficit compared with the stable atoms, could possibly be unstable and could be made to react with each other, liberating more energy than they started with. This was the first indication of the possibility of nuclear fusion in the laboratory and it was this discovery of fusion and the energy it released that ultimately paved the way to the Hydrogen bomb.


The isolation of Helium-3 and Tritium were eventually achieved in 1939 by American physicists Luis Alvarez and Robert Cornog working at Berkeley University's Radiation Lab. It is also claimed by some that their experiments which included the direct reaction of Tritium with Helium-3 provided the first demonstration of actual nuclear fusion in the lab.

The pair also determined that Tritium was subject to radioactive decay.

Willard Libby, pioneer of radiometric dating based on the radioactive decay of unstable isotopes, subsequently recognised that Tritium could also be used for the dating of water and wine.


1932 German electrical engineers Max Knoll and Ernst August Friedrich Ruska invented the first transmission electron microscope (TEM). One of the first applications of quantum mechanics theory, it depends on the wave properties of the electron rather than its particle properties. Instead of a light beam, it uses an electron beam which has a wavelength much shorter than that of light and can thus provide a much higher resolution. Focusing was by means of magnetic coils acting as lenses and by 1933 a magnification of 7000 times was achieved, far in excess of what was possible with visible light. The beam is detected after passing through a very thin specimen to create an image. It is now an essential tool for investigating the structure of materials.
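
The resolution advantage follows directly from de Broglie's relation λ = h/p. A rough non-relativistic estimate, using an assumed accelerating voltage typical of early instruments:

    import math

    # De Broglie wavelength of electrons in a TEM (non-relativistic estimate).
    h   = 6.626e-34    # Planck's constant, J.s
    m_e = 9.109e-31    # electron mass, kg
    e   = 1.602e-19    # electron charge, C

    V = 60_000.0       # accelerating voltage, volts (assumed)
    wavelength = h / math.sqrt(2 * m_e * e * V)

    print(f"electron wavelength ~ {wavelength * 1e12:.1f} pm")   # ~5 pm
    # Compare with ~550,000 pm (550 nm) for green light.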

Fifty four years later Ruska was belatedly awarded a Nobel Prize jointly with Binnig and Rohrer in 1986 in recognition of his fundamental work on electron optics and the invention of the electron microscope.

Knoll went on to invent the scanning electron microscope (SEM) in 1935 however the modern SEM was invented by Oatley in 1952.

See also STM


1932 First practical Fuel Cell system (Alkaline with porous electrodes) demonstrated by English mechanical engineer Francis Thomas Bacon, a direct descendant of Sir Francis Bacon, the 17th century philosopher.


1932 The Cavendish Laboratory's annus mirabilis. 34 years after the discovery of the electron and the proton, and 21 years after Rutherford developed the planetary model of the atom, English physicist James Chadwick, working under Rutherford at Cambridge University's Cavendish Laboratory, finally isolated the Neutron, confirming Rutherford's prediction of a heavy neutral particle twelve years earlier.

After Rutherford's further discovery in 1917 that by bombarding the atoms of certain elements with alpha particles from naturally occurring radioactive material they could be transmuted into atoms of a different element, accompanied by the emission of high energy, positively charged particles, his experiments were repeated by researchers and experimenters to explore the phenomenon with different materials.

  • In 1928, Walther Bothe and his student Herbert Becker in Giessen, Germany found that using Polonium as the source of the alpha particles and the light metals Beryllium, Boron or Lithium as the target elements, the reaction produced unusually penetrating neutral radiation but since it carried no charge they were unable to identify what it was.
  • In 1932 Frédéric Joliot and his wife Irène Joliot-Curie, daughter of Marie Curie, investigated the radiation from Beryllium by directing it at a target of paraffin wax, a hydrocarbon with a high hydrogen content and hence a high density of protons. They found that the unidentified radiation caused protons to be emitted at high velocity from the hydrogen atoms in the paraffin wax. Their conclusion was that the radiation must be high energy Gamma rays which carry no charge.

Chadwick was unhappy with this conclusion because the Gamma ray's weightless photons would not have sufficient energy to dislodge the heavy protons from the Hydrogen atoms. He reasoned that the radiation emanating from the Beryllium was in fact neutral particles, each with a mass about the same as that of a proton, which would give them enough energy to scatter the protons from the paraffin target. He repeated the Joliot-Curie experiments with a range of other elements as targets and, by comparing the energies of the recoiling charged particles from the different targets, he proved that the Beryllium emissions contained a neutral component with a mass approximately equal to that of the proton. He determined that this component was a new elementary particle which he called the neutron.


Physicists soon found that the neutron made an ideal "bullet" for bombarding other nuclei. Being uncharged, it was not repelled by the positively charged nucleus and could smash right into it. Before long, neutron bombardment was applied to the Uranium atom, splitting its nucleus and releasing the huge amounts of energy predicted by Einstein's equation E = mc². See Fermi (1942)

Chadwick was one of the many scientists who witnessed the Trinity test of the first atomic bomb in 1945.


In 1914, after gaining his masters degree at Manchester University, he won a scholarship to study beta radiation under Hans Geiger in Berlin and travelled there to take it up. Unfortunately, not long after he arrived, World War I broke out and he ended up spending the next four years in a German prison camp.

Chadwick was awarded the Nobel Prize for Physics in 1935 for his discovery of the neutron.


1932 Two more of Rutherford's researchers, English engineer and physicist John Douglas Cockcroft and Irish physicist Ernest Thomas Sinton Walton, constructed the world's first nuclear particle accelerator for investigating atomic structures, now known as the Cockcroft-Walton accelerator, or more colourfully as an atom smasher. It was a 750,000 Volt linear accelerator which they used to bombard a Lithium target with protons (Hydrogen nuclei) raised to an energy level of 750,000 electron Volts (750 keV). It used a cascade of simple voltage multiplier circuits, based on capacitors and diodes, to generate the very high voltages needed, in what is now known as the CW multiplier or CW generator, named after its inventors. Like many UK university experimenters at the time they had to improvise because of a shortage of resources, using amongst other things car batteries; for the glass cylinders surrounding the electrodes they used glass tubes from petrol pumps, and they used Harbutt's plasticine (children's modelling clay) to seal the joints in the vacuum tubes. Very high energies were needed to overcome the repulsion of the positively charged protons by the positively charged Lithium nucleus. The Lithium nucleus contains 3 protons and 4 neutrons. The high energy proton bombardment caused the Lithium nucleus to disintegrate into 2 alpha particles (Helium nuclei), each composed of 2 protons and 2 neutrons. This was the first disintegration of an atomic nucleus by controlled, artificial means, the first artificial nuclear reaction not utilising radioactive substances, the first use of a particle accelerator to split the atom and the first artificial transmutation of a metal into another element.
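The behaviour of the CW voltage multiplier can be illustrated with a little arithmetic. The following sketch computes the ideal no-load output (each stage contributes twice the peak AC input) together with a textbook approximation for the voltage droop under load current; the input voltage, stage count and component values are assumed for illustration and are not those of the 1932 apparatus.

```python
# Idealised Cockcroft-Walton (CW) multiplier behaviour. All component
# values below are illustrative assumptions, not historical data.

def cw_no_load_output(stages: int, v_peak: float) -> float:
    """Ideal no-load DC output: each stage adds twice the peak AC input."""
    return 2 * stages * v_peak

def cw_voltage_droop(stages: int, i_load: float, freq: float, cap: float) -> float:
    """Textbook approximation for the output droop under load current."""
    n = stages
    return (i_load / (freq * cap)) * (2 * n**3 / 3 + n**2 / 2 - n / 6)

v_peak = 125_000.0   # assumed 125 kV peak AC input
stages = 3           # three stages -> 750 kV ideal no-load output
print(f"No-load output: {cw_no_load_output(stages, v_peak) / 1000:.0f} kV")
print(f"Droop at 1 mA load, 50 Hz, 10 nF: "
      f"{cw_voltage_droop(stages, 1e-3, 50.0, 10e-9) / 1000:.1f} kV")
```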

Although it was claimed that Rutherford had already split the atom in 1917 using a radioactive source, he had merely knocked a proton out of the nucleus. Cockcroft and Walton had actually split it in two.


The speeds of the resulting Helium nuclei were measured and their corresponding kinetic energy calculated. It was found to be equivalent to the reduction in mass from the combined mass of the original Lithium nucleus and the bombarding proton to the combined mass of the two resulting Helium nuclei. In other words, the difference between the mass of the original Lithium nucleus plus the proton and the combined mass of the two resulting Helium nuclei is equal to the equivalent binding energy released when the Lithium atom split apart. This was the first verification of Einstein's law, E = mc².
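This arithmetic can be repeated with modern atomic masses. The sketch below uses approximate published mass values (in unified atomic mass units) rather than the 1932 measurements, and recovers the roughly 17 MeV of energy shared by the two Helium nuclei.

```python
# Mass-defect check for p + Li-7 -> 2 He-4, using approximate modern
# atomic masses in unified atomic mass units (u).
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

m_H1  = 1.007825     # Hydrogen-1
m_Li7 = 7.016004     # Lithium-7
m_He4 = 4.002602     # Helium-4

mass_defect = (m_H1 + m_Li7) - 2 * m_He4        # in u
print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released: {mass_defect * U_TO_MEV:.1f} MeV")  # ~17.3 MeV
```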

See also Aston's similar work on binding energy and the mass defect.


The "Daily Express" headline on the news of their success was "The Atom Split, But World Still Safe".

Cockcroft and Walton were awarded the Nobel Prize for physics in 1951.


1932 Shortly after Cockcroft and Walton's experiments (see previous items) American physicist Ernest Orlando Lawrence, working at the University of California, Berkeley, introduced the Cyclotron, a much more elegant and ingenious design for a particle accelerator. It consisted of two hollow "D" shaped electrodes, known as the "dees", resembling a flat, pancake shaped tin can cut into two halves, into which charged particles (ions or electrons) could be introduced. These electrodes were contained in a disc shaped glass vacuum chamber which was in turn held between the two poles of a powerful magnet creating a magnetic field perpendicular to the "dees". See Cyclotron diagram

When a high frequency, high power, alternating voltage is connected across the gap between the "dees" and charged particles are injected into the chamber near the centre, the particles move in a circular arc, at right angles to the magnetic field, in the plane of the "dees" due to the interaction of the moving charged particles (essentially an electric current) with the magnetic field. See Lorentz Effect in the page about electrical machines.

Each time the particles pass the gap between the "dees" they are accelerated by the electric field across the gap. The frequency of the alternating electric field is timed so that its electrical polarity changes in exactly the time that the particles take to make half a revolution of the chamber, so that the electric field across the gap is always in the same direction as the movement of the particles. In this way the particles trace a spiral path between the magnetic poles, receiving an energy boost each time they cross the gap between the "dees", gradually building up to very high energy levels. Because the particle beam starts with zero radius, it does not need a source of high energy particles and can therefore use a simple low kinetic energy ion source such as an ionised gas.
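The timing condition can be made concrete: the alternating voltage must reverse at the cyclotron resonance frequency f = qB / (2πm), which depends only on the particle's charge-to-mass ratio and the field strength, not on its speed or radius. The sketch below evaluates this for a proton; the 1 Tesla field is an assumed illustrative value.

```python
# Cyclotron resonance frequency f = qB / (2*pi*m) for a proton.
# The field strength is an assumed value for illustration.
import math

Q_PROTON = 1.602e-19   # proton charge, coulombs
M_PROTON = 1.673e-27   # proton mass, kilograms

def cyclotron_frequency(charge: float, b_field: float, mass: float) -> float:
    return charge * b_field / (2 * math.pi * mass)

b = 1.0   # tesla (assumed)
f = cyclotron_frequency(Q_PROTON, b, M_PROTON)
print(f"Proton cyclotron frequency at {b} T: {f / 1e6:.1f} MHz")  # ~15.2 MHz
```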


Because of the particle's spiral path, the cyclotron is many times smaller than the equivalent linear accelerator (LINAC).

Lawrence's first cyclotron measured just 11 centimetres (4.5 inches) in diameter and boosted Hydrogen ions to an energy of 80,000 electron Volts (80 keV). The TRIUMF cyclotron at the University of British Columbia in Vancouver, built in 1974, is 18 metres in diameter and can accelerate Hydrogen ions to energies of up to 520 MeV.


In 1939, Lawrence was awarded the Nobel Prize in Physics for his work on the cyclotron and its applications and chemical element number 103, discovered in 1961, is named "Lawrencium" in his honour.


More 1932 events - continued after "THEME"





THEME: Events and Developments in Particle Physics Relating to Quarks and the Strong Nuclear Force


See also the Standard Model of Particle Physics and the Timeline of Theories, Predictions and Discoveries to put the following discoveries into context.


After Chadwick's discovery of the neutron there was much speculation in the physics world about the nature of what became known as the strong force which held the nucleus together, preventing it from flying apart by the repulsive forces between the positive charges on the protons. It was also clear that the neutral neutron must also be bound by this force.


In 1932 Heisenberg proposed a theory that neutrons can be converted into protons and vice versa simply by passing an electron from one to the other. For this to work he needed to introduce a property analogous to electron spin which he called isospin. This does not mean that the particles are spinning: isospin is just a two-valued property, "up" or "down" ("positive" or "negative"), of the particle in an abstract charge-space. Converting a neutron to a proton and back is equivalent to reversing the values. The notion of an electron exchange between the particles was later abandoned, but the concept of isospin remained. In summary, the strong force sees the proton and neutron as two states of the same particle, the only difference between them being their isospin.


Do not forget that isospin, like many other aspects of particle physics, has a confusing name. It does not imply spin. It is just a mathematical model which can represent physical relationships, including spin, but the model does not explain how or why these relationships occur.


In 1933 Hungarian-American physicist Eugene Wigner suggested that the electromagnetic force is not involved in holding the nucleus together, and that there are two different nuclear forces which he called the strong and weak nuclear forces. He discovered that the strong force binding the nucleons (the protons and neutrons) together is very weak when the distance between them is great, but very strong when they are close together as in the atomic nucleus. He also explained that the force between two nucleons is the same, regardless of whether they are protons or neutrons. See also Fermi and the weak force.

In 1937 he also put forward the idea that the total isospin of a system is the sum of the isospins of all the particles in the system, and that isospin is a symmetry of the strong interaction which is conserved in the system.

Wigner was awarded the Nobel Prize in 1963 for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles.


Different theories about the strong force were also suggested by Yukawa and others, but it was not until 1965 after Nambu and Han introduced the concept of colour charge and the associated colour force, supporting Gell-Mann's 1964 prediction of quarks, that a satisfactory theory was agreed.


1935 Japanese physicist Hideki Yukawa working in Kyoto predicted that there must be particles, just like photons, in the atomic nucleus exchanging the strong nuclear force between the protons and the neutrons to hold them together. He predicted that they would have a mass about 250 times that of an electron but less than that of a proton, which is nearly 2,000 times heavier than an electron. It was 12 years before his theory was validated.

Yukawa was awarded the Nobel Prize in 1969 for his prediction of the existence of mesons on the basis of theoretical work on nuclear forces.

In the meantime confusion arose due to the discovery of two different particles with about the same mass as candidates for Yukawa's predicted particle.


In 1936 the first particle proposed as satisfying Yukawa's predictions behaved like a heavy electron and was discovered by American physicists Carl D. Anderson and Seth H. Neddermeyer at Caltech while studying cosmic ray traces in a cloud chamber which they mounted between a pair of large magnets.

They noticed new particles that curved less sharply than the negatively charged electrons when passing through the chamber's magnetic field, but more sharply than the heavy, positively charged protons, which curved in the opposite direction. Assuming that all the particles were travelling with the same velocity and that the magnitudes of their charges were the same, the unique curvature of the tracks traced by the new particles could be explained if their mass was heavier than an electron but lighter than a proton. They called the particle a mesotron (from the Greek mesos meaning "in the middle"), later shortened to meson.
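The underlying relationship is that a singly charged particle moving at right angles to a magnetic field follows a circular arc whose radius is proportional to its momentum, p = qBr. The sketch below illustrates this with assumed field and radius values; the numbers are illustrative, not Anderson's.

```python
# Momentum from track curvature: p = q * B * r. For equal speeds and
# charges, the radius scales with mass, which is how an intermediate
# mass particle could be recognised. All values are illustrative.
Q = 1.602e-19   # elementary charge, C
B = 0.5         # assumed magnetic field, tesla

def momentum_from_radius(radius_m: float) -> float:
    return Q * B * radius_m

for label, r in [("tight curve (light particle)", 0.05),
                 ("intermediate curve", 0.5),
                 ("shallow curve (heavy particle)", 5.0)]:
    print(f"{label}: r = {r} m -> p = {momentum_from_radius(r):.2e} kg m/s")
```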

Calculations showed that the mass of Anderson's particle was within 20% of Yukawa's prediction. Yukawa, supported by Robert Oppenheimer and others, claimed that this was the particle he had predicted. Some further confusion followed, since the observed particle did not exhibit the properties expected of the carrier of the strong force binding two nucleons together.


In 1937, experiments carried out by American physicists Jabez C. Street and Edward C. Stevenson at Harvard confirmed that the particle discovered by Anderson's team was not a meson, but was in fact a previously unknown fundamental particle which they called a muon, which is a lepton, like an electron, but 207 times heavier. It is not affected by the strong force and not related to Yukawa's prediction.

The name "meson" is now reserved for a special class of composite particles.


The second particle under consideration behaved more like Yukawa's prediction.

In 1947, physicists Cecil Powell (British), Giuseppe Occhialini (Italian) and César Lattes (Brazilian), investigating cosmic particles at Bristol University, UK, isolated the particle carrying the strong force predicted by Yukawa. It was a composite particle, a meson, which they called a pi meson or pion (π). After the discovery of quarks, it was confirmed that the pion consisted of two quarks (a quark and an antiquark).

The pion is the lightest meson, with a mass of around 270 times that of an electron, and comes in three varieties distinguished by their electric charges of +1, 0 and −1 times that of the proton.

Nowadays "meson" is the collective name for all particles composed of two quarks, (quark and antiquark pairs).


Also in 1947, after two years of investigating the penetrating properties of high energy cosmic ray particles, George Rochester and Clifford Butler from Manchester University discovered a new particle which they named the K meson or kaon.

Their detector consisted of a counter-controlled cloud chamber of Blackett's design, located within a huge electromagnet weighing 11 tonnes which provided a magnetic field strong enough to deflect the highest energy particles. After many weeks of observations they discovered two unusual V shaped patterns among the tracks, pointing back towards the source of the radiation. Each pair of tracks appeared from a single point, as if from nowhere, and could only be explained by assuming the existence of a new, but unobserved, neutral particle that produced no track on entry but which disintegrated into one positive and one negative particle. These were explained as being due to the spontaneous decay of hitherto unknown unstable, massive neutral particles, each weighing about half as much as a proton, or about 1,000 times as much as an electron. Charged kaons of similar mass were also found.


In the face of some scepticism from the scientific community and the paucity of the evidence, in 1949 they took their apparatus to the Pic du Midi, 2,850 metres up in the French Pyrenees, where they repeated their investigations with the higher incidence of cosmic particles found at altitude. This enabled them, between 1950 and 1951, to replicate the V shaped patterns on an almost regular basis, providing decisive proof of their initial findings. In some cases the neutral particle resulting from the decay turned out to be heavier than the proton, indicating the existence of yet another "strange" particle, which they called the lambda.


At the same time Carl Anderson arranged for a cloud chamber to be taken to the summit of White Mountain in California to validate the tests, and this confirmed Rochester's results. Nobody had predicted the existence of these particles, whose "strange" properties did not exist in ordinary matter, and their discovery, together with the pion, created great excitement in the physics world.


After the pion, the heavier kaon was the second type of meson to be discovered. However the kaons also exhibited some unusual properties which the investigators thought to be "strange". Kaons were being created in interactions that happened almost instantaneously, with the characteristically short time of the strong force, but they decayed much more slowly, taking about 10¹⁰ times longer than the process in which they were produced. (Even so, this long delay by nuclear standards was only about one tenth of a nanosecond.)


In 1953 American physicist Murray Gell-Mann of Caltech and Japanese physicist Kazuhiko Nishijima independently explained this puzzling reaction and proposed that the behaviour of the "strange" new particles could be explained if they were carrying a new type of charge that is conserved in strong interactions but not conserved in weak interactions. Gell-Mann dubbed this property "strangeness" and this became a technical term with its own quantum number. He explained that when particles were produced from collisions between composite particles resulting from the strong force, strangeness would be conserved since strange particles would be produced in pairs with positive and negative strangeness. Although both particles would be unstable, they would not be able to decay by the strong force into particles with zero strangeness because the strong force preserves strangeness. They could however decay by the weak force if it did not preserve strangeness. This would account for the longer lifespan of the strange particles.

Strangeness" is thus conserved in strong interactions but violated in weak interactions.


1961 Gell-Mann working at Caltech made an early attempt to define the underlying patterns and relationships between the previously unknown particles recently discovered in cosmic rays or resulting from high energy collisions in particle accelerators, which by then numbered over 30. He followed in the footsteps of Mendeleev who, in 1869, had constructed the Periodic Table of the Elements by classifying the properties of all known chemical elements into groups and arranging the elements in a table showing the atomic weight of each element together with its group membership to seek potential patterns.

Gell-Mann carried out a similar procedure by grouping the new particles according to their quantum properties of charge and spin and found a pattern with two sets of eight particles, baryons with spin 1/2 and mesons with spin 0.

Gell-Mann had wide interests in history, archeology and linguistics and as a child prodigy he had entered Yale at the age of 15. With his fondness for whimsical names, he named the pattern "The Eightfold Way" after the Buddha's eight steps to Nirvana. At the time however, only seven mesons had been identified and, just as the periodic table had been used to speculate on the existence of new elements, Gell-Mann predicted the existence of an eighth meson, which was duly discovered a few months later by Luis Alvarez working at the University of California's Berkeley Lab. An equivalent theory was independently proposed around the same time by Israeli physicist Yuval Ne'eman.

Though the Eightfold Way brought some initial order to particle physics theory, it was eventually superseded by Gell-Mann's quark model (see next) which eventually became an integral part of the Standard Model.


In 1964 Gell-Mann and, independently, Russian-American physicist George Zweig working at CERN went on to postulate that baryons (protons and neutrons) were composed of triplets of very small, strongly interacting, fundamental particles which Zweig called "aces". Gell-Mann's name for these particles was "quarks" because he liked the sound of the name which was taken from a quotation in the novel Finnegans Wake, by James Joyce ("Three quarks for Muster Mark!"). Its only relevance to particle physics is the number "three". Nevertheless Gell-Mann's name caught on.

It was also predicted that mesons were similarly composed of these same fundamental particles, but in the form of quark (q) and antiquark (q̄) pairs.

The proposed quarks had very unusual properties in that their charge and spin had fractional rather than integer values. At the time only three types (also known as flavours) of quark were known, also with fanciful names: the "up", "down" and "strange" quarks (u, d and s), with electric charges of +2/3, −1/3 and −1/3 respectively, and spin 1/2.

  • The proton contains 2 up quarks and 1 down quark, giving it a total charge of +1
  • The neutron contains 2 down quarks and 1 up quark, giving it a total charge of 0
  • Mesons could be composed of a variety of quark/antiquark pairs such as uū, dd̄, ud̄, dū and others. The possible total charge of the pair could be +1, 0 or −1, while the total spin will always be either 0 or 1, indicating that the meson also has the properties of a boson. (The charge arithmetic is illustrated in the sketch below.)
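
As a rough check of the charge arithmetic above, the short sketch below sums constituent quark charges using exact fractions. The lower/upper-case convention for quark/antiquark is purely an assumption of this sketch.

```python
# Summing constituent quark charges. Convention (assumed for this
# sketch): lower case = quark, upper case = the corresponding antiquark,
# which carries the opposite charge.
from fractions import Fraction

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def total_charge(quarks: str) -> Fraction:
    return sum((CHARGE[q.lower()] if q.islower() else -CHARGE[q.lower()])
               for q in quarks)

print("proton (uud):", total_charge("uud"))     # 1
print("neutron (udd):", total_charge("udd"))    # 0
print("pion u + anti-d:", total_charge("uD"))   # 1
print("pion d + anti-u:", total_charge("dU"))   # -1
```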

Apart from proton spin, the theory has been fully confirmed by experiment.

Unfortunately, experiments at CERN in 1987 to verify the spin of the proton showed that the proton's spin was less than the total spin of its constituent quarks. This became known as the "proton spin crisis" and is considered one of the important unsolved problems in physics.

Nobody has actually isolated or seen a single individual quark since they are permanently confined within observable particles like the proton and neutron from which single quarks cannot escape due to the strong inter-quark (nuclear) force, later identified as the colour force, which holds the particle together.


Gell-Mann constructed his complex theoretical model, without the benefit of experimental evidence to guide him, in defiance of contemporary conventional wisdom or "facts":

  • that neutrons and protons were fundamental, indivisible elementary particles
  • that charges must be integral and that fractional charges could not exist
  • and that particles could not be permanently trapped within known subatomic particles, unable to be isolated or observed.

He was proved to be right (mostly) and was awarded a Nobel Prize in physics in 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions. Zweig was also nominated but was inexplicably overlooked.


Later in 1964 American physicist Oscar W. Greenberg at the University of Maryland pointed out that having two or more identical quarks in a hadron's triplet of quarks violated Pauli's exclusion principle, a basic rule of quantum physics which does not allow two identical fermions to occupy the same quantum state. To overcome this problem he suggested that quarks should have a new, three-valued degree of freedom.


In 1965 Greenberg's idea was taken up by Korean-born American Moo-Young Han and Japanese Yoichiro Nambu who introduced the notion of a quantum colour charge with three possible values on the quarks, analogous to, but different from, the electric charge of electromagnetism. Each of these charges, and their corresponding forces, was later given the name of a colour, either red, green or blue, to distinguish them from each other and from the electromagnetic force, a naming scheme popularised by Gell-Mann. The colours can also be positive or negative. Analogous to the electromagnetic force, like-coloured charges repel each other and different-coloured charges attract, but the three colour charges when combined result in a neutral charge. This compares with the combination of the three primary colours to produce white, and explains why the name "colour" was chosen to represent the charges. All particles made from quarks (hadrons) are colour neutral. Colour is merely used as a label and quarks do not have an observable colour.

Because the antiquark has the negative version of the quark's colour, quarks attract antiquarks to form mesons such as the pion and the kaon. This also explains why the meson does not have an associated colour charge.


Han and Nambu also suggested that the gauge symmetry between quarks is their "colour", so that colour is their "conserved" quantity which cannot be created or destroyed. This compares with Quantum Electrodynamics (QED), in which "charge" is the conserved quantity.


In 1968 evidence of the existence of quarks was confirmed by a team at the Stanford Linear Accelerator Center (SLAC) in deep inelastic scattering experiments in which they bombarded protons with high speed electrons. An electron in motion behaves like a wave whose wavelength varies inversely with its energy. A spectrometer could therefore distinguish between high energy, short wavelength electrons and lower energy, long wavelength electrons.

If the proton were a solid, singular particle, the electrons would bounce off the massive proton, losing little energy in making the proton recoil, and their remaining energy could be monitored in the spectrometer. If the proton instead consisted of a quark triplet in a random orientation, with each quark having its own inherent energy, field and motion, the energy of the recoiling electrons would be spread over a range of wavelengths depending on their impacts with the individual quarks, indicating that the proton had a substructure. This spread of energy could be measured by the spectrometers, providing evidence of the scattering effect of the quarks as well as an indication of their energies.
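The inverse relationship between energy and wavelength can be put into numbers. For a highly relativistic electron the de Broglie wavelength is approximately λ = hc/E; the sketch below shows that beam energies of tens of GeV probe distances far smaller than the proton's radius of about 10⁻¹⁵ m. The specific energies used are illustrative, not quoted SLAC settings.

```python
# De Broglie wavelength of a highly relativistic electron: lambda ~ h*c / E.
H_C = 1.23984e-6   # h*c in eV*metres

def wavelength_m(energy_eV: float) -> float:
    return H_C / energy_eV

for e in (1e9, 20e9):   # 1 GeV and 20 GeV, illustrative beam energies
    print(f"E = {e / 1e9:.0f} GeV -> lambda = {wavelength_m(e):.1e} m")
# 20 GeV gives ~6e-17 m, well below the ~1e-15 m proton radius.
```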


In 1972 Gell-Mann suggested that the carrier of Han and Nambu's new, tri-valued, mediating force gluing quarks together was a boson, a massless bundle of radiation which he called a "gluon" (glue-on), another of his names which has stuck.

Together with fellow physicists Harald Fritzsch (German) and Heinrich Leutwyler (Swiss), Gell-Mann further developed the concept of a colour charge as the source of a strong force holding together not just the quarks but also the atomic nucleus. They investigated the properties of the gluons, the colour force exchange particles which provide the strong "colour forces" acting between quarks, holding them together to form observable "white" objects like the neutron and the proton, and at the same time holding those protons and neutrons together to form heavier nuclei.

They determined these gluons to be "flavour neutral", interacting in the same way with all three generations of quarks, but "colour sensitive" since only different coloured quarks attract each other via the strong interaction. Thus each gluon had to carry a unique combination of both a colour and an anticolour charge (such as red-antiblue, blue-antigreen, or green-antired). With three colours, there are six possible colour-anticolour pairs of "different" colours, each gluon only reacting to its own specific coloured pair of quarks, so that six different gluons are needed. A further two gluons are required for interactions involving colour-anticolour pairs of the "same" colour (such as red-antired). Thus a total of eight gluons are needed to cater for all possible interactions.
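The counting argument above can be spelled out by simple enumeration, as in the sketch below: of the nine possible colour-anticolour combinations, the six mixed pairs each give a gluon, while the three same-colour combinations mix into only two independent states (the remaining combination being the colourless singlet), for eight gluons in total.

```python
# Enumerating gluon colour-anticolour combinations: 6 mixed pairs plus
# 3 same-colour pairs, of which one combination is the colourless
# singlet, leaving 8 gluons.
from itertools import product

colours = ["red", "green", "blue"]
pairs = [(c, f"anti-{a}") for c, a in product(colours, colours)]

mixed = [p for p in pairs if p[1] != f"anti-{p[0]}"]
same  = [p for p in pairs if p[1] == f"anti-{p[0]}"]

print(len(mixed), "mixed-colour pairs")             # 6
print(len(same), "same-colour pairs")               # 3, giving 2 states
print("total gluons:", len(mixed) + len(same) - 1)  # 8
```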


Gluons are massless like the photon and do not carry electric charge, but unlike the photon they carry the colour charge. Thus they feel the colour force and can therefore react strongly with quarks, as well as with other gluons, radiating further gluons. This strong force keeps the gluons "confined" together with the quarks inside hadrons. Unlike photons, which can roam free, the gluon's range of influence is limited to a "femto-universe" of 10⁻¹⁵ m, or 1 femtometre, in radius. Leptons such as the electron, which do not feel the strong force, are also free to roam.

(Confusingly, the familiar table summarising the properties of particles in the Standard Model shows the gluon as having no charge. This is because, by convention, the "charge" mentioned in this model is the "electric charge" not the "colour charge". Similarly the charges shown to be associated with quarks are the electric charges not the colour charges which are omitted from the common version of this diagram.)

Only colour neutral particles can exist in isolation; all hadrons therefore have a net colour charge of zero.


Nobody has ever seen a gluon, the quantum particle associated with the strong "colour" force between quarks, or even a quark. Quarks are locked up inside hadrons in a state known as confinement. They can however advertise their presence indirectly by generating jets of particles in high energy collisions. For example, an electron and a positron annihilate each other creating a quark and antiquark pair. Initially this was thought (incorrectly) to involve the exchange of a photon. If the collision energy is high enough the quark and antiquark fly apart, degenerating into hadrons such as pions and kaons which are emitted as two "jets" radiating outwards in the same plane from the collision point.

In 1976 British physicists John Ellis and Graham Ross, together with American physicist Mary Gaillard, proposed that, theoretically, very high energy electron-positron collisions would result in three co-planar jets. Two of these jets correspond to a quark-antiquark pair, while the third corresponds to a gluon. The quarks and gluons however quickly decay into more hadrons, so that all three jets are composed of hadrons.

In 1979 these "three jet" events were detected by the TASSO team working on the PETRA particle accelerator at the Deutsches Elektronen-Synchrotron (DESY) in Hamburg. They provided the first direct experimental evidence for the existence of gluons, the carriers of the strong nuclear force.


In 1973 American physicists David Gross and Frank Wilczek working at Princeton University, and David Politzer working independently at Harvard, published the theory of asymptotic freedom in strong interactions. They discovered that the "colour charge" or strong force pulling the quarks together actually increases as the quarks are separated, something which seemed completely contradictory. Conversely, as the quarks move closer to each other, the force between them becomes weaker, such that at very close range the quarks behave almost as free particles. This phenomenon of reduction in force is known as "asymptotic freedom". Separating the quarks takes a massive amount of energy, which may be applied in a collision with another particle or by other means. If this energy is high enough, the hadron containing the quarks may split apart, but the quarks are not released. Instead the energy applied to separate them is converted into mass, as per Einstein's theory, and appears as new quark-antiquark pairs (mesons) or other hadrons.

This discovery was an important initial step towards a new theory, Quantum ChromoDynamics, QCD.


The trio were awarded the Nobel Prize in Physics in 2004 in recognition of their work on this topic.


In 1973 Gell-Mann, Fritzsch and Leutwyler, continuing their study of quarks and gluons, gave these studies the name Quantum Chromodynamics (QCD) because of the significance of "colour" in the behaviour of the quarks. This is different from QED, where there is only one photon which carries the electromagnetic force but is not electrically charged and does not react with other photons or radiate further photons.

The QCD theory was a major step forward in the development of the Standard Model.


In 1974, at a meeting at Stanford's SLAC on November 11, Burton Richter from SLAC announced the results of his experiments with high energy electron-positron collisions and, at the same meeting, Chinese-American Samuel Ting from Brookhaven at Upton, New York also announced the results of his own investigations into the interactions of high energy protons with a beryllium target. By coincidence both experimenters had produced a stream of new particles, with a resonance spike in the number of particles formed at an energy of 3.1 GeV, giving the particles a mass of over three times the mass of a proton. The particles also had a lifetime of 10⁻²⁰ seconds, which is a thousand times longer than expected for such a heavy particle.

It turned out that they had independently produced the same new particle. Investigations showed that this was a meson consisting of a charm quark and an anticharm quark, the first evidence of the existence of the charm, the fourth quark, predicted by Glashow and others. It also confirmed the existence of a second generation of fermions.

Richter had chosen the symbol "ψ" for his particle while Ting had chosen the letter "J", which is similar to the Chinese character representing his name, Ting. The meson has been known ever since as the "J/ψ" meson. Their surprising discovery, and the better understanding of the physics involved, stimulated a growing interest in particle physics.


Richter and Ting were awarded the 1976 Nobel Prize in Physics for their discovery of the J/ψ and charm particles.


In 1977 the upsilon (Υ) meson and the bottom quark were discovered by a team led by Leon M. Lederman at Fermilab at Batavia near Chicago. In an experiment similar to Ting's discovery of the J/ψ meson, he directed a beam of higher energy protons from a 400 GeV accelerator onto a beryllium target. This produced a stream of particles similar to Ting's, but this time with a resonance spike at an energy of 9.46 GeV, equivalent to a mass of ten times that of a proton, and with a lifetime of 1.21 × 10⁻²⁰ seconds. These particles were likewise determined to be mesons, which they named upsilons (Υ), and in this instance they were composed of the bottom quark and its antiquark.


It is typical of all heavy particles that it takes very high energy collisions to create them, and such collisions usually result in debris from unwanted reactions consisting of a large quantity of particles, often of several different types, spraying in diverse directions. At the same time, heavy particles are typically unstable, decaying quickly into smaller particles. This creates several challenges for the particle detector which must isolate particles from the desired reaction and filter out particles from every other reaction. In the Fermilab experiments, only one upsilon is produced for every 100 billion protons which strike the target, and the experimenters had to isolate and identify this single event. This detection is made much more difficult because of the very short lifetime of the heavy meson which decays into other particles almost immediately after it is formed.


As with other detectors used for analysing the results of high energy collisions, such as SLAC's tau detector, the presence of particles with very short lifetimes could only be inferred from an analysis of the debris from the collisions to discover the signature of the interactions associated with the particle of interest. This includes the separation (filtering) and detection of the different particles predicted to arise from the main interaction, as well as the particles resulting from the decay of the expected particle of interest.

Lederman's particle detector used hadron absorbing materials to eliminate "background" particles, resulting from the collisions, from the upsilon search. Beryllium would have been the optimum absorber but, despite scouring the nation, they could only find a total of two tons of this material, which was both expensive and very scarce at the time. Most of the unwanted particles were therefore absorbed by a 6 foot (2 m) deep tungsten dump from which the remainder emerged in a narrow beam. Particles at larger angles (three to six degrees) entered an 18 foot (6 m) filter containing about 12 cubic feet (0.34 m³) of the beryllium, weighing two tons. The beryllium absorbs strongly-interacting particles, such as pions and protons, but passes muons with a minimum of scattering, permitting accurate measurement of their trajectories and momenta with magnets and particle detectors after the target box.

The muons emerging from the beryllium filters were deflected by two large magnets and their paths were recorded in large multi-wire detectors. Twenty million records of possible muon pairs were picked up from the detectors, coded electronically by a small computer and stored on magnetic tape. The presence of the upsilon was revealed by a search for patterns in this data.


In 1995 the top quark was isolated for the first time by the collaborating CDF and DZero teams working on Fermilab's Tevatron proton-antiproton collider. The two teams, named after the Tevatron's two detectors, were each composed of around 450 "top" physicists from research establishments around the World. The top quark is so heavy that its discovery had to wait until the commissioning of the Tevatron, 17 years after the discovery of the bottom quark. At the time, the Tevatron was the only accelerator capable of producing particles with sufficient energy and equipped with particle colliders and detectors capable of tracking the results of its high energy collisions.


At first glance the task may appear impossible. The expected (and later confirmed) mass of the top quark, at around 175 GeV/c², is almost the same as that of an atom of gold and about 185 times the mass of the protons in the beam used to create it.

How can two protons create such a heavy object?

The answer is that each of the protons is moving extremely close to the speed of light, with extremely high energy. In a collision, however, when their speed is reduced to around zero, Einstein tells us that much of this energy is converted into mass. This allows proton-antiproton annihilation to produce daughter particles, such as top quarks with a mass of 175 GeV/c², much heavier than the original protons.


The Tevatron accelerated beams of protons and antiprotons to 99.999954 percent of the speed of light, with an energy of 900 GeV, in opposite directions around a ring shaped collider four miles (6.3 km) in circumference.
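The quoted beam speed and beam energy can be reconciled with a little relativistic book-keeping, as in the sketch below: the Lorentz factor implied by the speed is roughly a thousand, giving a beam energy of the same order as the 900 GeV figure (the small mismatch simply reflects rounding in the quoted speed).

```python
# Lorentz factor and beam energy implied by the quoted Tevatron beam
# speed, using the rounded proton rest energy of 0.938 GeV.
import math

M_PROTON_GEV = 0.938
beta = 0.99999954   # fraction of the speed of light quoted above

gamma = 1.0 / math.sqrt(1.0 - beta**2)
print(f"Lorentz factor: {gamma:.0f}")                          # ~1040
print(f"Implied beam energy: {gamma * M_PROTON_GEV:.0f} GeV")  # ~980 GeV
print(f"Head-on collision energy: {2 * gamma * M_PROTON_GEV / 1000:.2f} TeV")
```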

The two beams collided head on with a combined energy of 1.8 TeV at the centres of two massive 5,000-ton cylindrical detectors, standing three stories high and located at two different positions around the beam pipe. The 1.8 TeV collision energy was well above the electroweak unification energy, estimated to be around 100 GeV, which prevailed in the Universe shortly after the Big Bang, so that it was the electroweak force that applied to the reactions between the particles at the instant of the collisions.


The production of top quarks can happen in several ways.

In the CDF collider, the particle collisions initially produce a highly energetic gluon which decays into top - antitop quark pairs, similar to Lederman's production of the Upsilon meson. These pairs in turn rapidly decay into other particles.

In the DZero collider, single top quarks are produced directly when an intermediate W-boson, which has zero mass in its high energy electroweak state at the instant of the collision, decays into a top and antibottom quark or they may result from the transformation of a bottom quark (probably created in a pair through the decay of a gluon) into a top quark by exchanging a W-boson with an up or down quark.


The huge Tevatron detectors placed in concentric layers around the beam pipe incorporated a variety of technologies to identify the different particles resulting from the 8 million collisions per second in the colliders.

  • Short lived bottom quarks and other particles were detected by high resolution silicon strip detectors mounted around the beam pipe as close as possible to the collision point. They were constructed from long parallel strips of closely spaced diode materials etched into silicon wafers and connected to a central computer which recorded the magnitude and timing of the electrical impulses arising when a particle landed on the diode strip.
  • A second layer of tracking chambers surrounded the inner silicon detector layer.
  • The CDF detector used multi-wire drift trackers, based on an array of wires, each carrying a high voltage, within a gas filled chamber whose conductive walls were held at ground potential. Charged particles passing through the chamber ionise the surrounding gas atoms, and the liberated charges are accelerated by the electric field across the chamber, causing a cascade of charged particles to collect on the individual wires as they approach, producing a charge on the wire which is fed to a computer as in the silicon detector.

    The DZero detector used fibre trackers, filled with scintillating fibres. When particles cross the fibres they generate light, which is propagated by the fibres to visible light photon counters and hence to the computer processing system.

    In both the CDF and DZero trackers, magnetic fields are used to chart the paths of charged particles, such as protons or electrons, as they drift through the chamber, and their momentum can be deduced from the curvature of their paths. Particles with very high momentum leave a relatively straight path, while particles with low momentum leave small spirals.

  • Particles without a charge, such as photons, do not leave tracks. Calorimeters in the next layer however measure the energy of showers of particles instead by completely absorbing them.
  • The DZero detector uses a set of precision uranium based calorimeters filled with liquid argon as the active medium for this purpose.

    In the CDF detector, charge free particles, as well as lighter particles such as electrons and other leptons, are captured by electromagnetic calorimeters which measure their energy. The CDF calorimeter is constructed from sheets of a plastic scintillator material, which absorb energy and emit light, sandwiched between 3/4-inch (20 mm) layers of lead. The lead stops the particles, and the scintillator picks up the energy they deposit.

  • Similarly in the next layer, the heavier hadron jets resulting from the collisions are captured in hadron calorimeters to determine their energy. The CDF detector uses steel instead of lead in its scintillator sandwich.
  • The highly energetic muons, like electrons but 200 times heavier, pass right through all of these layers and penetrate the massive outer steel casing of the detector, which absorbs the remaining particles. Housed within this steel casing is an array of muon detectors which are similar in operating principle to the multi-wire detector used in the second layer, except that they have only a single wire in a gas filled Aluminium cylinder. The charged muons ionise the gas, charging the wires, and the resulting signal is carried by the wires to the computer which counts, measures and records the intensity of the events. Layers of scintillator materials, whose fast acting properties in converting the muon's energy to light provide a more accurate measure of the timing of the particle drift in the chamber, are placed behind the muon chambers.

  • For comparison, see CERN's CMS Particle Detector.


To avoid destructive rivalry between the CDF and DZero teams, Fermilab director John Peoples insisted on the joint publication of all papers (amounting to around 400) relating to the discovery of the top quark. Most started with several pages listing the names of the participants.


See also Leptons and the Weak Nuclear Force



More 1932 events before the above Theme Panel


1932 Russian physicist Igor Tamm proposed the concept of the phonon, a quantum of vibrational or kinetic energy, analogous to the photon, which is a quantum of light energy. These energy bundles represent the molecular vibrational state or the kinetic energy of a vibrating crystal lattice whose average kinetic energy is measured by its absolute temperature. Electrical and thermal conductivity can be explained by phonon interactions. Like photons, phonons have the characteristics of both waves and particles.


1932 G. W. Heise and W. A. Schumacher constructed the first zinc air battery. High energy density primary cells, zinc air batteries were used to power Russia's Sputnik 1 in 1957.


1932 Sabine Schlecht and Hartmut Ackermann, working in Germany, invented the porous sintered pole plate, which provides a larger effective electrode surface area and hence lower internal impedance and higher current capability, bringing about major improvements to Nicad battery design.


1932 Following on from the theoretical work on distortion reduction by means of feedback control systems by his colleague Harold Black at Bell Labs, Harry Nyquist proposed a method for determining the stability of feedback control systems. Known as the Nyquist stability criterion, it was developed from the study of the behaviour of negative feedback amplifiers, but it has universal applicability, being applied to mechanical systems (position, speed, temperature and pressure controls) as well as electrical systems (voltage amplitude, frequency and phase controls) and even non-physical models such as the national economy. It is used as a development tool to ensure the stability of electronic control and protection circuits.
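In the frequency domain the criterion amounts to asking how closely the open-loop response L(jω) approaches the critical point −1. The sketch below applies this numerically to an assumed open-loop transfer function, made up for illustration (it is not one of Nyquist's examples), locating the phase crossover and computing the gain margin.

```python
# Numerical gain-margin check in the spirit of the Nyquist criterion,
# for an assumed open-loop transfer function L(s) = 2 / (s(s+1)(s+2)).
import numpy as np

def open_loop(w):
    s = 1j * w
    return 2.0 / (s * (s + 1.0) * (s + 2.0))

w = np.logspace(-2, 2, 200_000)
L = open_loop(w)

idx = np.argmin(np.abs(np.angle(L) + np.pi))   # phase crossover, near sqrt(2) rad/s
gain_margin = 1.0 / np.abs(L[idx])
print(f"Phase crossover at w = {w[idx]:.3f} rad/s")
print(f"Gain margin = {gain_margin:.2f}")      # ~3; below 1 would mean instability
```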

See also Closed Loop Control Systems for an explanation and Airy for a description of earlier systems.


1932 Fibreglass, like glass, has been "invented" many times over. The first glass fibres of the type that we know today as fibreglass were discovered by accident by Dale Kleist working at Corning Glass. While he was attempting to weld two glass blocks together to form an airtight seal, a jet of compressed air unexpectedly hit a stream of the molten glass and created a shower of glass fibres, indicating an easy method of creating fibreglass. Fibreglass insulation had been patented in 1836 by Dubus-Bonnel, produced in volume by Player in 1870, patented again by Hammesfahr in 1880 and re-invented by Boys in 1887; however it was Russell Games Slayter of Owens-Corning who was granted a patent for "Fiberglas" in 1938.

The term 'fibreglass' is often used imprecisely for the composite material glass-reinforced plastic (GRP).


A fibreglass mat is an essential component used to absorb and immobilise the acid electrolyte in AGM Lead Acid batteries. Fibreglass composites are also used extensively for high power cell and battery casings.


1933 The "Dassler patent" recognized the oxygen cycle and recombination as fundamental principles of the sealed Nickel-Cadmium battery.


Research into improved Nickel-Cadmium batteries by Schlecht, Ackermann and Dassler was driven by the need for light weight aircraft starting batteries.


1933 Walter Meissner and Robert Ochsenfeld discovered that when a superconducting material is cooled below its critical temperature, magnetic fields are excluded or expelled from the material. This phenomenon of repulsion was discovered by Faraday and is known as diamagnetism. The low temperature effect is today often referred to as the "Meissner effect".


1933 The first injection moulded polystyrene articles were produced.


1933 ICI chemists Reginald O. Gibson and Eric William Fawcett produced Polyethylene, a polymer of ethylene gas. Like many chemical developments it was discovered by accident, this time while reacting ethylene and benzaldehyde at high pressure. ICI gave the material, now used extensively in the electrical industry as an insulator, the name Polythene.


1933 Radio pioneer Armstrong patented Frequency Modulation (FM radio) as a way of reducing interference on radio transmissions. Since most electrical noise produces amplitude variations in the signal, Armstrong's system involved varying the frequency of the radio carrier wave (rather than the amplitude, as in AM radio) in synchronism with the amplitude of the voice signal. By clipping the received signal, the amplitude noise can be eliminated. The idea, which revolutionised radio reception, was at first rejected and then stolen by his old friend David Sarnoff, the head of RCA, in which Armstrong was a major shareholder.
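The principle can be sketched in a few lines of signal arithmetic: the carrier's instantaneous frequency is made to track the message amplitude, so the transmitted amplitude stays constant and amplitude noise can be clipped off at the receiver. All the parameter values below are assumed for illustration.

```python
# Minimal frequency modulation sketch: the carrier's instantaneous
# frequency follows the message amplitude. Parameter values are
# illustrative assumptions.
import numpy as np

fs = 48_000          # sample rate, Hz
f_carrier = 5_000    # carrier frequency, Hz
f_msg = 200          # message (voice tone) frequency, Hz
k_f = 1_000          # frequency deviation per unit message amplitude, Hz

t = np.arange(0, 0.05, 1 / fs)
message = np.sin(2 * np.pi * f_msg * t)

# Integrate the instantaneous frequency to obtain the carrier phase.
phase = 2 * np.pi * np.cumsum(f_carrier + k_f * message) / fs
fm_signal = np.cos(phase)
print(f"Peak amplitude: {np.abs(fm_signal).max():.3f}")  # constant ~1
```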

Armstrong had previously fought a legal battle all the way to the U.S. Supreme Court over his 1912 invention of the regenerative radio receiver, which amplified weak radio signals by feeding them back through a triode amplifier valve (tube). However in 1920, when the value of Armstrong's invention became known, Lee De Forest claimed ownership of the regeneration principle because it used his audion vacuum tube. Unfortunately, after 12 years of litigation, the Supreme Court, not familiar at that time with such technical distinctions, found in De Forest's favour.


Like Farnsworth before him, Armstrong suffered at the hands of RCA. Short of funds and faced with more years of costly and heartbreaking litigation against former friends over his FM patents, in January 1954 Armstrong put on his hat, his overcoat and his gloves, stepped onto the ledge of his 13th floor apartment building in New York City and plunged to his death. His wife who had contributed to Armstrong's depression by refusing to help fund his litigation against RCA, continued it herself and eventually won.


1933 US patent awarded to Erwin E. Franz for flexible printed circuits made by screen-printing or stencilling a paste loaded with carbon filler onto cellophane, followed by a copper electroplating step to reduce the resistance. He also proposed using flexible folding circuits for windings in transformers.


1933 Pondering over Chadwick's recent discovery of the neutron, and Cockcroft and Walton's splitting of the Lithium atom into two alpha particles, Hungarian physicist Leo Szilárd, a Jewish refugee from Germany then working in London, conceived the possibility of a neutron chain reaction and its associated critical mass. He wondered what might happen if the two experiments were combined and neutrons rather than alpha particles were used as projectiles for the bombardment of nuclei, and what would happen if this produced extra neutrons. In 1933, the periodic table had many gaps and there were many new elements still to be discovered. In his own words Szilárd said - "It occurred to me that if we could find an element which is transformed by neutrons and which would emit two neutrons when it absorbed one neutron, such an element if assembled in sufficiently large mass could sustain a nuclear chain reaction". He also thought it could lead to the production of new isotopes as well as the release of useful energy. The possibility of an explosion under certain circumstances was also considered, but more in the context of a risk than an opportunity.
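Szilárd's back-of-envelope reasoning is easy to put into numbers, as the sketch below does: if each absorbed neutron releases k new neutrons on average, the population after g generations is k to the power g. The k values are purely illustrative and imply no particular element.

```python
# Neutron population growth over successive generations: N = N0 * k**g.
# Illustrative multiplication factors only; no specific material implied.
def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    return n0 * k**generations

for k in (0.9, 1.0, 2.0):
    print(f"k = {k}: after 80 generations -> {neutron_population(k, 80):.3g}")
# k < 1 dies away, k = 1 is just self-sustaining, k > 1 grows explosively.
```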

In 1934 he filed a UK patent application for "Improvements in or relating to the transmutation of chemical elements" outlining his theories, even though he had not yet identified a suitable element capable of supporting the expected non-fission chain reaction, nor had he carried out any experiment to prove the concept. Neither did he propose fission as the mechanism for his chain reaction, since the fission reaction had not yet been discovered, or even suspected.

In 1936, he assigned his chain-reaction patent to the British Admiralty to ensure its secrecy. His ideas for a non-fission nuclear reactor did not, however, prove practical. Nevertheless, this did not stop others in later years from crediting Szilárd, erroneously, as the inventor of the atomic bomb.


Fearful of German intentions with nuclear weapons and disturbed by the lack of American action, in 1939 Szilárd persuaded Albert Einstein to write to President Roosevelt, urging him to initiate an American atomic weapons programme. In 1943 he was rewarded for his pains by Major General Leslie Groves, leader of the Manhattan Project designing the atomic bomb, who forced Szilárd to sell his atomic energy patent rights to the U.S. government.


In like manner, in 1942 the Russian nuclear physicist Georgy Nikolaevich Flerov noticed that articles on nuclear fission were no longer appearing in western journals from which he concluded that research on the subject had become secret, prompting him to write to Premier Joseph Stalin insisting that "we must build the Uranium bomb without delay." Stalin took the advice and appointed Igor Vasilevich Kurchatov, director of the nuclear physics laboratory at the Physico-Technical Institute in Leningrad, to initiate work on Russia's bomb. Their first nuclear bomb was finally tested on 29 August 1949 near Semipalatinsk on the steppes of Kazakhstan. Flerov and Kurchatov both received the Soviet Union's highest award, the title of Hero of Socialist Labour and the Gold Star medal.


1934 Husband and wife physicists Frédéric Joliot and Irène Joliot-Curie, daughter of Marie Curie, realised the alchemist's dream of transmuting one element into another. Investigating the structure of the atom, they discovered that by bombarding natural stable isotopes of certain common elements with alpha particles (Helium nuclei) they could create radioactive isotopes of other elements not normally radioactive. Thus radioactive isotopes of Nitrogen could be obtained from Boron, Silicon from Magnesium and Phosphorus or Silicon from Aluminium. These radioactive isotopes, not found naturally, decompose spontaneously with the emission of electrons, positrons or neutrons.

Taking the case of bombarding Aluminium with alpha particles (⁴He₂) as an example, there are two possible outcomes, represented by the two following equations (and checked in the sketch after them):

(See an explanation of the notation here)

  • The unstable isotope Silicon-30 is created and a proton (Hydrogen nucleus) is emitted.
  • ⁴He₂ + ²⁷Al₁₃ → ³⁰Si₁₄ + ¹H₁

  • The unstable isotope Phosphorus-30 is created and a neutron is emitted.
  • ⁴He₂ + ²⁷Al₁₃ → ³⁰P₁₅ + ¹n₀
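
Both equations balance in the two quantities that nuclear reactions conserve: the mass number A (superscript) and the atomic number Z (subscript). The sketch below checks this mechanically, with each species written as an (A, Z) pair.

```python
# Conservation check for the two reactions above: total mass number (A)
# and total atomic number (Z) must match on both sides.
def balanced(lhs, rhs):
    return (sum(a for a, _ in lhs) == sum(a for a, _ in rhs) and
            sum(z for _, z in lhs) == sum(z for _, z in rhs))

alpha, Al27 = (4, 2), (27, 13)
Si30, H1 = (30, 14), (1, 1)
P30, n1 = (30, 15), (1, 0)

print(balanced([alpha, Al27], [Si30, H1]))   # True: A 31 = 31, Z 15 = 15
print(balanced([alpha, Al27], [P30, n1]))    # True: A 31 = 31, Z 15 = 15
```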

Previously the only way to obtain radioactive elements was to painstakingly extract them in tiny amounts from their natural ores at considerable cost. With the Joliot-Curies' discovery of induced radioactivity, useful radioactive isotopes were now relatively easy and inexpensive to produce, and they rapidly became important tools in biomedical research and in the treatment of cancer and other medical conditions. The production of neutrons also provided an important tool for studying the atomic nucleus and releasing its energy.


The couple were awarded the 1935 Nobel Prize for Chemistry for their discovery of artificially produced radioactive isotopes.


The same year, Italian physicist Enrico Fermi working at the University of Rome investigated the bombarding of the atomic nuclei of heavy metals, like the Joliot-Curies, but with neutrons rather than alpha particles. Like them, he also determined that elements such as Gold which were not normally radioactive would be made radioactive when a neutron was absorbed by the Gold nucleus increasing its atomic mass by one from 197 to 198. The resultant isotope with its extra neutron was highly unstable and hence radioactive with a very short half-life of a few minutes. This radiation or decay is due to a neutron in the nucleus transforming into a proton, with the emission of an electron (beta radiation). This extra proton increases the atomic number of what was the Gold nucleus by one from 79 to 80 transmuting it into the heavier Mercury, but its atomic mass remains unchanged.

The transformations can be represented by the following equations:

  • The neutron is absorbed by the Gold atom creating an isotope of Gold.
  • ¹n₀ + ¹⁹⁷Au₇₉ → ¹⁹⁸Au₇₉

  • The unstable Gold isotope breaks up by beta decay, creating Mercury and emitting an electron.
  • ¹⁹⁸Au₇₉ → ¹⁹⁸Hg₈₀ + e⁻

The process of neutron bombardment was later used to create heavy radioactive elements such as the Plutonium used in nuclear weapons.


Neutron Energy: Also in 1934, as part of his investigations of neutron bombardment, Fermi placed a 5 cm (2 inch) thick slice of paraffin between his neutron source and the target atoms and was astonished to find that this had a dramatic effect on the induced radioactivity of the target, with the emission rate increasing by a factor of 100 or more. He determined that the energy of the neutrons emerging from the paraffin was probably much less than one thousandth of the energy of the energetic or fast neutrons entering it, having been slowed down by the Hydrogen nuclei in the paraffin. Paradoxically these slower neutrons had an increased effect on the atoms in the target mass, increasing the probability of a radioactive reaction with the emission of more fast neutrons. This was because the slower neutrons were more likely than faster neutrons to be absorbed and captured by the target atoms, causing more of them to become unstable, thus increasing the overall nuclear transformations in the target and consequently increasing the energy released. This was contrary to the prevailing view that faster bombarding particles would create greater induced radioactivity in the target. While this conventional view was true for bombardment with alpha particles, the opposite was true for neutrons.

Fermi tested other materials and confirmed that the energy or speed of neutrons could be "moderated", or reduced, by passing them through certain materials, now called moderators, and this in turn increased their efficiency in transforming their target atoms, causing the release of more high energy particles and possibly leading to a chain reaction. The slow neutrons are also called thermal neutrons since their energy is more comparable to the energy released in chemical reactions than in nuclear reactions.
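The slowing-down process can be estimated with the standard average logarithmic energy decrement ξ of each moderator: the number of elastic collisions needed is roughly ln(E₀/E)/ξ. The sketch below uses textbook ξ values; the result, of the order of twenty collisions in hydrogen, shows why a few centimetres of paraffin sufficed.

```python
# Rough count of elastic collisions needed to slow a fast neutron to
# thermal energy, using textbook average logarithmic energy decrements.
import math

XI = {"hydrogen (paraffin, water)": 1.0, "carbon (graphite)": 0.158}
E0, E_THERMAL = 2e6, 0.025   # eV: typical fast neutron -> room temperature

for moderator, xi in XI.items():
    n = math.log(E0 / E_THERMAL) / xi
    print(f"{moderator}: ~{n:.0f} collisions")
# Hydrogen: ~18 collisions; graphite: ~115.
```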

Fermi's discovery of neutron-absorbing moderators and their ability to either slow down fast neutrons or to limit their uncontrolled multiplication, while at the same time increasing their efficiency in producing more high energy neutrons from the target atoms, enabled nuclear reactions to be both promoted and controlled which is the basis for the design and operation of nuclear reactors. Typical materials used for moderators in modern power reactors using enriched Uranium fuel are graphite and water.


Fermi was awarded the 1938 Nobel Prize in Physics for his discovery and production of transuranic elements (man-made elements heavier than Uranium) and his work on the effect of slow neutrons on nuclear reactions.


He was appointed the first director of the US Argonne National Laboratory, and also led the team that built the world's first controlled nuclear reactor in 1942.


1934 Sealed lead acid batteries with gelled electrolyte were first manufactured by Elektrotechnische Fabrik Sonneberg in Germany.


1934 Invention of the transformer-clamp by Chauvin Arnoux, the very first current measuring clamp.


1935 Radio detection of ships at sea had been pioneered by Hülsmeyer in 1904 but the potential of the technology had not been developed. The first practical Radar (RAdio Detection And Ranging) system was produced by the Scottish physicist Robert Alexander Watson-Watt, a direct descendant of James Watt, the pioneer of the steam engine. As fears of an impending war grew, he had been tasked by the Air Ministry to come up with a radio "death ray" to disable enemy aircraft; however, he informed them that this was not possible and proposed instead the system we now call Radar for detecting the presence of aircraft before they came into sight. This was accomplished by sending out powerful radio pulses and detecting their return after reflection by the aircraft, computing the distance from the time it took the pulses to return. Large directional antennas were used to concentrate the signals and provide an indication of the bearing of the target. Being a two way system, one of the major problems he had to overcome was getting very sensitive receivers to work in close proximity to very high power transmitters without being swamped. Watson-Watt received a knighthood in recognition of his achievements.

Ironically, after the war, Watson-Watt was amongst the first unsuspecting drivers to be caught in a Radar speed trap.


In 1935 and 1936 Watson-Watt also filed patents for the Identification - Friend or Foe (IFF) system, which was able to distinguish radar signals reflected from friendly aircraft from the confusing mass of signals returned by both friendly and hostile aircraft flying at high speed and high altitude, which made visual identification impossible. Initially, the system was based on a dipole antenna mounted on the aircraft and tuned to the frequency of the transmitting radar system. This antenna acted as a passive transponder by resonating with the radar signal illuminating the aircraft, thus modifying and amplifying the returned radar pulse so that it could be distinguished from unmodified pulses from enemy aircraft. Later systems used active transponders which included an oscillator which could be tuned to different radar frequencies to transmit amplified, and ultimately encoded, signals back to the radar base station, thus improving the accuracy and reliability of the system.

The Radio Frequency Identification (RFID) tags introduced in the 1970s are developments of Watson-Watt's IFF responder system.


1935 German physicist Oskar Ernst Heil, working at Berlin University, was granted a British patent for "Improvements in or relating to electrical amplifiers and other control arrangements and devices". His design was essentially an insulated gate field effect transistor (IGFET). Using semiconducting materials such as Tellurium, Iodine, Cuprous oxide or Vanadium pentoxide to form a resistor between two terminals, he applied a voltage across the device. By means of a third control terminal he created an electrostatic field across the device at right angles to the current, and by varying the voltage on this control terminal he was able to vary the resistance of the semiconductor and thus modulate the current through an external circuit.


Heil's transistor was never developed into a practical product. Semiconducting materials of sufficient purity were not available at the time, and in the period leading up to and during World War II the scientific establishments of the countries in which he worked had other priorities.


Heil however had other interests which benefited from the new focus on research applicable to military applications. He had married Agnessa Arsenjeva, a Russian physicist, while working in Russia. In 1935, the same year that he was granted the patent for his semiconductor amplifying device, together with his wife he published in Zeitschrift für Physik a paper on velocity modulation of electron beams entitled "A New Method for Producing Short, Undamped Electromagnetic Waves of High Intensity", which outlined the fundamental working principles of the Klystron tube, a high power microwave oscillator used to provide the transmitter power in the newly developed radar equipment. Leaving Arsenjeva in Russia, he later moved to the UK to continue development work on the klystron with Standard Telephones and Cables (STC), the UK arm of ITT. The day before Britain went to war with Germany, Heil slipped out of the country, returning to Germany to continue his work at Standard Elektrik Lorenz (SEL), ITT's German arm in Berlin. Heil's klystrons, known as "Heil's Generators", became key components in Germany's World War II radars.


The klystron amplifier works by modulating a high energy electron beam, passing between the cathode and the high voltage anode (typically tens of kiloVolts) of a vacuum tube, by passing the beam first through an input cavity resonator excited by a high/microwave frequency (RF) source. (See diagram of the Klystron). The electrons passing through the resonator are either slowed or accelerated depending on the polarity of the RF input signal at the instant the electron is passing through the cavity causing the electrons to form bunches at the input frequency. This bunching is reinforced as the faster electrons catch up with the slower electrons as the beam transits between the cathode and anode thus increasing the intensity or amplitude of the modulation in a process known as velocity modulation. Before hitting the anode, the electron beam passes through a second, output or "catcher", resonant cavity where the RF energy is absorbed by the cavity and coupled out via a coaxial cable or waveguide.

The klystron can also be configured as an oscillator by coupling the signal from the output cavity back to the input cavity, thus providing positive feedback which creates spontaneous oscillations at the resonant frequency of the cavities.


See also the Travelling Wave Tube (TWT)


After the war, Heil's name appeared on an FBI list of Germans accused of war crimes. He was brought to the US by the military and worked at Wright Patterson Air Force Base. Subsequently he formed his own company and carried out intensive research into the physiology of the human ear and sound generation by small animals, which he applied to the design of sound transducers. His 1973 patent for the Heil Air Motion Transformer (AMT) made him well known to HiFi buffs.


In 1937, American electrical engineers, brothers Russell and Sigurd Varian, also developed a klystron tube based on principles outlined in 1935 by the Heils, but they did not publish their work until 1939. They went on to found Varian Associates in 1948, specialising in microwave components, and were the first to move into Stanford Industrial Park, the birthplace of Silicon Valley.


1930s Introduction of Ampoule batteries for use in military fuses.


1936 Carlton Ellis of DuPont was awarded a patent for polyester resin, which can be combined with fibreglass to produce high strength composite materials.

The curing and manufacturing processes for polyester resin were further improved and refined by the Germans. During World War II British intelligence agents stole secrets for the resin processes from Germany and turned them over to American firms. American Cyanamid produced the direct forerunner of today's polyester resin in 1942.


1937 The birth of digital technology. American mathematician Claude Elwood Shannon was one of the first to realise the similarity between electric switching circuits, Boolean logic and binary arithmetic. He published the proof in his 1937 MIT master's thesis, A Symbolic Analysis of Relay and Switching Circuits, and was the first to use these principles as a basis for information processing, using electromechanical relays to build logic circuits which were used in Vannevar Bush's differential analyser. See also Zuse who developed these ideas independently. Shannon's work on digital technology formed a vital strand of his later work on Information theory.


Shannon, like Zuse, showed that logic devices which are commonly called gates may be implemented with mechanical switches, relays or valves (now transistors).

A computer can perform almost any logic or arithmetic operation using combinations of only three types of gates, called AND, OR, and NOT gates. If an "input" or an "output" is defined as a logic "1" and the absence of an input or output as a logic "0" then:

  • AND gates give an output only if all the inputs to the gate are present.
  • OR gates give an output if any of the inputs to the gate are present.
  • NOT gates give an output if no input to the gate is present. A gate used for this function is also called an inverter.

See also Boolean Logic and Digital Circuits.
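As a minimal illustration of Shannon's insight, the three gate types can be written as simple functions and combined to perform binary arithmetic. The sketch below (in Python rather than relays, with invented function names) builds a one-bit half adder from nothing but AND, OR and NOT:

```python
def AND(a, b): return a & b   # output 1 only if all inputs are 1
def OR(a, b):  return a | b   # output 1 if any input is 1
def NOT(a):    return 1 - a   # output 1 only if the input is 0 (inverter)

# A one-bit half adder built purely from the three gates:
# 'sum' is the exclusive-or of the inputs, 'carry' their AND.
def half_adder(a, b):
    s = AND(OR(a, b), NOT(AND(a, b)))  # XOR expressed via AND, OR and NOT
    carry = AND(a, b)
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))  # (sum, carry) for each input pair
```

Chaining such adders together gives multi-bit addition, which is essentially how every digital computer since has performed arithmetic.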


1937 Eccentric English engineer and visionary Alec Harley Reeves, working at ITT in France, invented pulse code modulation (PCM) to minimise the effect of noise on transmission systems. Although his system was used for top secret communications during World War II, it needed many more components than conventional analogue circuits and it was not until the availability of integrated circuits that the large scale deployment of digital PCM systems became economically viable.


Electrical noise can be a serious problem with all communications circuits. As a signal progresses down a communications channel it gets weaker and at the same time picks up electrical noise. Each time the signal is amplified to restore its level, the noise is amplified with it, until the signal may eventually be swamped by the noise. Digital circuits avoid this problem by using a transmitter which samples the analogue signal at high speed (see Shannon above) and converts the amplitude of each sample into a series of pulses, coded so that the pattern of the pulses represents the amplitude of the signal. This process is known as quantising and may be used to derive a simple binary number or some more complex encrypted data code. Noise distorts the pulsed or digital signal in exactly the same way; however, the weakened pulses are not amplified to restore the signal strength. Instead, using a technique first employed by Henry in 1831, the distorted or noisy pulses are simply used to trigger a new set of clean, high level pulses to replace the weak and dirty signal pulses. The original pulsed waveform is thus regenerated and the noise is left behind. At the receiving end the original analogue signal is reconstituted from the pulses. Because of their noise immunity and amenability to multiplexing and computer controlled data manipulation, digital circuits based on Reeves' work have now almost completely replaced analogue circuits even for the simplest of functions. Standard integrated circuits are available to carry out the analogue to digital (A to D) and digital to analogue (D to A) conversions.
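The two key steps, quantising samples into binary code words and regenerating noisy pulses rather than amplifying them, can be sketched in a few lines. The sample rate, 4-bit resolution and noise level below are arbitrary illustrative choices, not parameters from Reeves' system:

```python
import math, random

BITS = 4                # illustrative 4-bit quantiser (16 levels)
LEVELS = 2 ** BITS

def quantise(x):
    """Map a sample in the range -1..+1 to a binary code word."""
    level = min(LEVELS - 1, int((x + 1) / 2 * LEVELS))
    return format(level, f'0{BITS}b')

def regenerate(pulse):
    """A noisy pulse is not amplified; it simply triggers a clean new
    pulse: anything above the halfway threshold becomes a 1."""
    return 1 if pulse > 0.5 else 0

samples = [math.sin(2 * math.pi * t / 16) for t in range(16)]
codes = [quantise(s) for s in samples]          # transmitter side

# Each transmitted bit arrives distorted by additive noise, but as long
# as the noise never crosses the decision threshold the regenerated bit
# stream is identical to the one sent.
sent_bits = [int(b) for code in codes for b in code]
received = [b + random.uniform(-0.4, 0.4) for b in sent_bits]
regenerated = [regenerate(p) for p in received]
print(regenerated == sent_bits)   # True: the noise is left behind
```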


Although a pacifist, Reeves developed a pinpoint bombing system during the war "to minimise civilian casualties". He worked on radar systems, multiplexers, fibre optics and acoustic components and was awarded over 100 patents. He also experimented with the paranormal, using Geiger counters, pendulums and electronics in his research, and believed he was in regular contact with the long dead Michael Faraday. He claimed to have played in the French Open tennis championships - which were indeed 'open' to anyone who wished to participate. Reeves dedicated his private life to community projects, helping others, encouraging youth and rehabilitating prisoners.


1937 English engineer Robert J. Dippy, working in Watson-Watt's radar team at the UK Telecommunications Research Establishment (TRE), conceived the radio navigation system using coordinated transmissions from three or more radio stations to pinpoint the location of a receiver. It relies on the fact that all the points at which the time difference between radio signals from two stations is constant lie on a hyperbola with the two transmitters at its foci. The measured time difference thus places the receiver somewhere on a known hyperbola, and signals from a second pair of stations determine another hyperbola. The exact position of the receiver is found at the point on the map where the two hyperbolas intersect. Dippy received a patent in 1942 for this invention, which was implemented in the Gee navigation system used by British Bomber Command in World War II. (The name "Gee" or "G" is short for Grid). Dippy's principle of intersecting hyperbolic position lines was subsequently used in the LORAN navigation network and is used in the modern GPS (Global Positioning System) in which the transmitters are located in orbiting satellites rather than in fixed ground based stations. Like computers, the early navigation systems were large and heavy and housed in equipment racks. Modern navigation receivers are hand held and battery powered.
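The principle is easy to demonstrate numerically. In the hypothetical sketch below the station coordinates and receiver position are invented purely for illustration: the receiver "measures" two time differences, and a brute force search over a map grid recovers the point where the two hyperbolas intersect:

```python
import math

C = 299.792458  # speed of light in km per millisecond

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical ground stations (km): a "master" A and two "slaves".
A, B, S = (0.0, 0.0), (400.0, 0.0), (0.0, 300.0)
receiver = (123.0, 217.0)   # true position, to be recovered

# Measured time differences (ms) between the master and each slave.
td_ab = (dist(receiver, A) - dist(receiver, B)) / C
td_as = (dist(receiver, A) - dist(receiver, S)) / C

# Grid search: the intersection of the two hyperbolas minimises the
# squared mismatch between predicted and measured time differences.
best, best_err = None, float('inf')
for x in range(0, 500):
    for y in range(0, 500):
        p = (float(x), float(y))
        e1 = (dist(p, A) - dist(p, B)) / C - td_ab
        e2 = (dist(p, A) - dist(p, S)) / C - td_as
        err = e1 * e1 + e2 * e2
        if err < best_err:
            best, best_err = p, err

print(best)   # (123.0, 217.0), recovered to the 1 km grid resolution
```

Real Gee and LORAN receivers solved the same geometry with charts of pre-printed hyperbolic lattice lines rather than by computation.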

After working as advisor on the development of LORAN in the USA, Dippy became a Divisional head of research in Australia's Defence Science and Technology Organisation.


1937 Printed circuits were demonstrated by John Adolphe Szabadi, a London born British engineer with Hungarian parents. In 1938 Szabadi changed his name to John Sargrove, by which he is better known, since Adolphe wasn't the most popular name in Britain at the time. His circuits were more like thick film integrated circuits than the printed circuit boards (PCBs) we know today. The system did not use etching as with modern PCBs. Instead, the Sargrove method was an additive process in which not just the interconnecting circuit tracks but also the resistors, inductors, capacitors and other components were formed by spraying onto a pre-moulded bakelite panel.


1938 American engineer Hendrik Wade Bode, building on Nyquist's work at Bell Labs, employed magnitude and phase frequency response plots of a complex function to analyse closed-loop stability in electronic systems. This formed the basis of classical control theory used in the design of stable electronic and other control systems.


1938 Twenty-year-old Canadian radio enthusiast living in the USA, Al Gross, invented and patented the Walkie-Talkie two-way mobile radio, which was picked up by the U.S. military and widely used during World War II (1941-1945 in the USA). In 1948, he pioneered Citizens' Band (CB) radio and was the first to receive a license to produce two-way radios for personal use. In 1949, he invented the telephone pager, a personal radio messaging device which could be contacted via the telephone network. Aimed at organisations such as hospitals needing to contact key staff, it initially met with slow market acceptance since users felt that it was an intrusion on their personal time.

During the 1950s he proposed to Bell Telephone a combination and extension of his pager and Walkie-Talkie concepts to create a two way radio telephone system, but Bell apparently could not see why anyone would ever need a mobile telephone. In 1958 he did however receive an FCC license himself for the system, but without the backing of a network operator to install and operate it, there was no market. Bell eventually produced their own Cellular Phone System in 1971.


An engineer ahead of his time, Gross saw the value of each of his revolutionary ideas recognised only belatedly; commercial benefits only began to accrue around the time that his patents were expiring, so that he never reaped the full potential rewards of his innovations.


1938 On December 17, German chemists Otto Hahn and his student and assistant Fritz Strassmann at the Kaiser Wilhelm Institute for Chemistry (KWI) in Berlin bombarded Uranium atoms with neutrons, expecting to create heavier Uranium isotopes or to transmute them into heavier radioactive elements, the so-called transuranium elements. They were surprised and puzzled however to discover Barium, which has only about half the mass of Uranium, among the products of their experiment. They had also overlooked the simultaneous creation of a second by-product of the reaction, the colourless and odourless noble gas Krypton. Conventional wisdom at the time was that a neutron projectile might be absorbed into the nucleus, or might knock out a few nucleons from it, but that it did not have sufficient energy to dislodge so many nucleons from the nucleus.

Hahn wrote to Austrian physicist Lise Meitner, his friend and collaborator, now a refugee from Germany working in Sweden, seeking a solution to the puzzle. At the time, Meitner's nephew, physicist Otto Frisch, also exiled from Germany and working at Niels Bohr's Institute in Copenhagen, was visiting her in Stockholm, and in discussions the pair realised that this was an example of the splitting of the atom. Comparing the process to biological cell division, Frisch coined the term "nuclear fission". They recognised its importance, provided the first theoretical explanation of the underlying mechanisms of the reaction and calculated the immense energy it could release.


Meitner received Hahn's letter on 21st December requesting comments about the presence of Barium after Uranium had been bombarded by neutrons, and she responded immediately, confirming that the result was odd but not impossible.

By coincidence her young nephew Frisch arrived from Copenhagen on a family visit to his aunt two days later. Together they used the opportunity to work out a more detailed explanation of Hahn's puzzling result using Gamow's theoretical liquid drop model of the atomic nucleus. This implied that very large nuclei like Uranium's would be unstable and that the impact of a single neutron could cause one to break up. They were able to confirm to Hahn that the result was an example of nuclear fission and that the reaction had the potential to release massive amounts of energy.

Frisch returned to Copenhagen on the 1st January 1939 and over the next two weeks, communicating with Meitner by telephone, they worked out the reaction in more detail and calculated the energy release.

They recognised that the reaction was due to the fission of the Uranium atom, and simply by subtracting Barium's atomic number from that of Uranium they determined that the second product of the fission would be Krypton. Though the presence of the gas had been missed by Hahn, later experiments confirmed it to be present with the Barium.


They described the reaction as follows:

  • The fission is initiated when a neutron is absorbed by the fissile Uranium-235 isotope creating an unstable Uranium-236 isotope.
  • ¹n₀ + ²³⁵U₉₂ → ²³⁶U₉₂

  • The unstable Uranium-236 then breaks apart by fission, creating two new elements, Barium-141 and Krypton-92, and releasing about 200 MeV of energy from the fission of a single Uranium-235 atom, plus three more neutrons which can go on to create further fissions in neighbouring Uranium-235 atoms, thus resulting in a possible chain reaction.
  • ²³⁶U₉₂ → ¹⁴¹Ba₅₆ + ⁹²Kr₃₆ + 3[¹n₀]
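Note how the equations balance: the leading (mass) numbers and the trailing (atomic) numbers must each sum to the same totals on both sides of the arrow. A few lines of Python make the bookkeeping explicit (the particle names and tuples are our own shorthand, each particle being a (mass number, atomic number) pair):

```python
# Each particle as (mass number, atomic number); the names are ours.
n     = (1, 0)       # neutron
U235  = (235, 92)
U236  = (236, 92)
Ba141 = (141, 56)
Kr92  = (92, 36)

def balanced(lhs, rhs):
    """True if both mass number and charge balance across the arrow."""
    return (sum(m for m, z in lhs) == sum(m for m, z in rhs)
            and sum(z for m, z in lhs) == sum(z for m, z in rhs))

print(balanced([U235, n], [U236]))               # True: neutron capture
print(balanced([U236], [Ba141, Kr92, n, n, n]))  # True: fission fragments
```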


See also a diagram showing the energy release from nuclear fission.


Using Aston's empirical data about the "mass defect" between an element's atomic mass and the mass of its constituent protons and neutrons, Meitner determined that the mass difference between their Uranium-235 sample and its fission products was about 20% of the mass of a proton. In other words one fifth of a proton's mass remained unaccounted for in the fragments resulting from the collision of a single neutron with a single Uranium atom. Using Einstein's equivalence of mass and energy, E = mc², she calculated that the missing mass had been converted into about 200 million electron volts (MeV) of energy (or 3.2 X 10⁻¹¹ Joules).
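Her estimate is easy to reproduce with modern constants (the proton rest energy of about 938 MeV and the MeV-to-Joule conversion are standard values, not figures from Meitner's letter):

```python
proton_rest_energy_mev = 938.3   # m*c^2 for one proton, in MeV
mev_to_joules = 1.602e-13        # Joules per MeV

# One fifth of a proton's mass, expressed as energy:
print(0.2 * proton_rest_energy_mev)   # ~188 MeV, i.e. roughly 200 MeV
print(200 * mev_to_joules)            # ~3.2e-11 Joules per fission
```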

Meitner wrote to Hahn on the 3rd January confirming their "wonderful findings" that they had actually achieved fission and had split the Uranium atom.


On his return to Copenhagen with the news about the solution to Hahn's "Uranium puzzle", Frisch discussed it with Bohr, who had similarly been puzzled. Not surprisingly, he immediately understood Frisch and exclaimed, "Oh, what idiots we have all been!".

Back in his laboratory on 13th January Frisch repeated Hahn's fission experiment in an ionisation chamber and measured the pulses of ionisation produced by the fission fragments. This enabled him to confirm that fission had occurred and that the energy released by the reaction amounted to 200 MeV per atom.


To put this into context:

  • One gram (0.035 ounces) of Uranium-235 contains about 2.563 X 10²¹ atoms (See Avogadro).
  • This means that the fission of one gram of Uranium-235 will release 512.6 X 10²¹ MeV of energy, which is equivalent to 82.12 gigaJoules or 22.8 MegaWatthours.

  • By comparison, the most energetic chemical reactions liberate about 5 eV per atom, while the detonation of dynamite releases only about 10 eV per molecule.
  • Thus a nuclear fission reaction liberates about 40 million times more energy than a typical chemical reaction, and the energy of a nuclear explosion based on fission would be 20 million times more powerful than a conventional explosion using the same amount of material. One kilogram of Uranium would have the same explosive power as 20 million kilograms (or 20 thousand tons) of conventional explosive.
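These figures can be checked with a few lines of arithmetic using standard constants:

```python
AVOGADRO = 6.022e23     # atoms per mole
MEV_TO_J = 1.602e-13    # Joules per MeV

atoms_per_gram = AVOGADRO / 235          # 1 g of U-235 (molar mass ~235 g)
energy_joules = atoms_per_gram * 200 * MEV_TO_J

print(atoms_per_gram)          # ~2.563e21 atoms
print(energy_joules / 1e9)     # ~82 gigaJoules
print(energy_joules / 3.6e9)   # ~22.8 MegaWatthours
print(200e6 / 5)               # 200 MeV vs 5 eV: the "40 million times" figure
```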


On 22 December 1938, the day after they had sought Meitner's help, Hahn and Strassmann submitted a draft paper to Germany's Naturwissenschaften describing the unusual results of their experiment. By the time it appeared in print on 6 January 1939, the puzzle of fission had been resolved and their paper had been updated several times. It was published under their names alone and described their discovery of "Uranium fission", but without the full explanation of the reaction involved. Meitner's work on the physics of the reaction was not credited, nor even mentioned, in the report since Hahn feared the result would be rejected if it were known to be tainted by "Jewish science" - female Jewish science at that! Frisch's contribution was likewise not acknowledged.


To set the record straight, at Niels Bohr's suggestion, Meitner and Frisch wrote their own report with a full description of their findings, including a calculation of the energy released, in a paper entitled Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction. They mailed it to Nature on 16 January and it was published on February 11.


Meanwhile, Bohr confirmed the validity of the findings himself while sailing to New York, arriving on January 16. Ten days later on 26 January 1939 at a conference on theoretical physics at the George Washington University in the USA, 16 days before Meitner and Frisch's paper was published, Bohr, accompanied by Enrico Fermi, addressed an audience of many of the world's top physicists including European émigré scientists who had escaped from Nazi persecution. He described the latest scientific developments from Europe and they were stunned when he informed them that a super bomb might be possible using nuclear fission.

Now the momentous news was in the public domain.


The implications of this news were quickly recognised by Leo Szilárd who had previously speculated about the possibilities of chain reactions. Not only did fission release immense amounts of energy, but the neutrons released by the fission process can go on to split further atoms thus releasing even more neutrons magnifying the energy release to unimagined levels. Such a reaction could be used to make a very powerful bomb like nothing seen before.


The publication of these two papers and Bohr's announcement about the newly discovered fission caused a sensation in the physics world. Up to that point nuclear physics had been mainly an academic pursuit and its focus had been on unravelling the curiosities of the science. The news about the possibility of nuclear fission and its potential for releasing immense amounts of energy suddenly shifted the focus to finding ways of capturing and using this awesome power. It marked the start of a new era as researchers stepped up their efforts to create practical applications, particularly for nuclear power generation but especially for the manufacture of nuclear weapons. Many new investigations were started and papers published over the next few months, while the threat of a war in Europe added urgency to the quest.


On 22 April 1939, ignoring the pleadings of Szilárd who wanted to keep this new technology out of German hands, Frédéric Joliot-Curie's team (Russian Lew Kowarski and Austrian Hans von Halban), working at the Collège de France in Paris, were the first to publish results confirming that fission produced enough secondary neutrons and had the potential to start a self-sustaining nuclear chain reaction, a fundamental requirement for a nuclear weapon. (See Notes below)


The next month, von Halban and Kowarski suggested that heavy water could be used as a neutron moderator in a nuclear reactor fuelled by natural Uranium. They had carried out experiments on Uranium using ordinary water as a moderator, but found that its Hydrogen atoms absorbed too many neutrons, preventing the desired chain reaction. Heavy water however, whose Deuterium nuclei already contain a neutron and so capture very few more, was shown to be an ideal moderator.

This was a great step forward in enabling the applications of nuclear energy, avoiding the need for enriched Uranium fuel. Together with Joliot-Curie, they secretly registered three patents with the French "Caisse Nationale de la Recherche Scientifique", two of which concerned energy production and the third referred to an atomic bomb.


The following month, French mathematician and physicist Francis Perrin, also from Joliot-Curie's team, was the first to conceive of, and to publish, a theoretical approximation of the critical mass of Uranium required to produce a self-sustaining release of energy.

Based on parameters known at the time, Perrin calculated the critical mass of Uranium oxide to be a ball about 3 metres (10 feet) in diameter with a mass of about forty tons.


Notes

  • Chain Reaction
  • Fission may be initiated by bombarding the radioactive nuclear fuel with neutrons from an external source, or from neutrons emitted by radioactive atoms in the fuel itself. However for a self-sustaining chain reaction to take place, the rate at which neutrons are created must be greater than the rate at which they are consumed in other fissions or absorbed in non-fission reactions with other atoms and molecules or emitted from the system. If the number of fissile atoms is small as in a low mass fuel charge, or if they are widely dispersed, then most of the neutrons released by the initial fission will not encounter more fissile atoms and the reaction will die out.

    The condition for a chain reaction is usually expressed in terms of a neutron multiplication factor, k, which is defined as the ratio of the number of fissions produced in one step (or neutron generation) in the chain, to the number of fissions in the preceding generation. If k is less than unity, a chain reaction cannot be sustained. If k = 1, a steady-state chain reaction can be maintained; and if k is greater than 1, the number of fissions increases at each step, resulting in a divergent or exponential chain reaction.


  • Critical Mass
  • When the rate of neutron production is equal to the rate of neutron losses, including both neutron absorption and neutron leakage, k = 1 and the condition is known as criticality. It is the point at which the chain reaction just becomes self-sustaining and the corresponding fuel mass at this point is called the critical mass. Below this mass k < 1 , the reaction is said to be subcritical and it dies out. Above it, k > 1, the reaction is said to be supercritical and the rate of the reaction increases. The effective critical mass depends on many factors, including the degree of enrichment of the fuel, its density, temperature, shape, and whether it is confined within a neutron-reflective container.
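The behaviour described in these notes can be illustrated numerically; the starting population and number of generations in the sketch below are arbitrary illustrative choices:

```python
def neutron_population(k, generations, start=1000):
    """Neutron count after each generation for multiplication factor k."""
    counts, n = [], float(start)
    for _ in range(generations):
        counts.append(round(n))
        n *= k                  # each generation multiplies the population by k
    return counts

print(neutron_population(0.9, 8))   # subcritical: the reaction dies away
print(neutron_population(1.0, 8))   # critical: a steady-state chain reaction
print(neutron_population(1.1, 8))   # supercritical: exponential growth
```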


See how these theories were applied in the design of the experimental Chicago Pile CP-1, the world's first nuclear reactor.


Hahn alone was awarded the 1944 Nobel Prize for Chemistry for "his discovery" of the fission of heavy atomic nuclei.


Lise Meitner was one of the most important nuclear physicists of the twentieth century, but despite a record 48 nominations for the Nobel Prize by leading scientists such as Max Planck, Niels Bohr, Werner Heisenberg, Arthur Compton and Max Born, who all nominated her multiple times, and the self-seeking Hahn, who nominated her once, she was never awarded this honour.

She did however achieve immortality when element 109, first created in 1982, was named Meitnerium in her honour. Unfortunately she had died in 1968, the same year as Hahn, and so was unaware of this accolade.


Meitner had first met Hahn in 1907 when she was working as an assistant to Max Planck at the University of Berlin and for over 30 years they had worked together on radioactivity and nuclear physics, discovering several new isotopes and exploring the neutron bombardment of Uranium and other elements. It was Hahn's job to create the experiments and to separate and process the radioactive materials while Meitner's job was to explain the nuclear processes involved in the reactions.

From 1912 they worked together at KWI, where she published, independently, fifty-six papers between 1921 and 1934. They were joined by Strassmann in 1929, and from 1934 they were particularly involved in identifying the products of neutron bombardment of Uranium and other elements and their decay patterns. In that year Fermi had announced the results of his transmutation experiments in which he had used neutrons to bombard various elements to create heavier elements, some of which were previously unknown to science, and Hahn's team expected similar results. This explains Hahn and Strassmann's surprise at discovering the lighter Barium in their experiment in 1938. This is despite the fact that fission had first been demonstrated in 1932 by Cockcroft and Walton, who split the lighter Lithium atom into two alpha particles, though its far reaching importance had not been recognised at the time. Hahn's puzzling discovery took place without the presence of Meitner who, having Jewish ancestry, had been forced to flee from Germany five months earlier.


In 1938, Germany had invaded and annexed Austria in the "Anschluss", bringing Austrians under the Third Reich's anti-Semitic laws which, amongst other things, persecuted Jewish academics and forbade famous scientists from travelling abroad. As a consequence, Meitner's highly focussed research at KWI came to an end. Her research funding was withdrawn since she had been born into a Jewish family, even though she had been a baptised Christian since 1908. She also feared for her life. In this situation she was encouraged by Niels Bohr to leave Germany and he arranged for her to take up a research position in Sweden at the Nobel Institute for Physics in Stockholm. On the 13th July, at the age of 59, with the support of Otto Hahn and help from the Dutch physicists Dirk Coster and Adriaan Fokker, she escaped via the Netherlands without a passport and with only two small suitcases and 10 Reichsmarks in her purse, leaving behind all her personal possessions and papers. Hahn had however given her a diamond ring he had inherited from his mother, to be used to bribe the frontier guards if required, but it was not needed.


Meitner lived in an age when many women were still excluded from science and their contributions were undervalued. Despite this she published many important scientific papers in her lifetime. But life was not always easy. Even when she arrived at Sweden's Nobel Institute, at the height of her creativity, she had a pitiful salary and no longer had access to research facilities such as she had enjoyed at the KWI in Berlin, but she persevered and during this period kept in contact with Hahn by mail. Surprisingly, after missing out on recognition for her most important work on fission, she remained on friendly terms with Hahn for another 30 years.

Though Meitner was a key contributor to the team that discovered and explained nuclear fission and foresaw its explosive potential, she was a pacifist and refused an offer to work on the Manhattan Project at Los Alamos. She also turned down the invitation to attend the Trinity test of the first nuclear explosion. She did however continue her research and lecturing activities into the 1960s.


1938 Contrary to popular belief, non-stick Teflon was not a product of NASA's space program. It was discovered accidentally in 1938 by DuPont chemist Dr. Roy J. Plunkett while investigating possible new refrigerants. His lab technician Jack Rebok found an apparently defective cylinder of tetrafluoroethylene gas. Although it was the same weight as full cylinders, no gas emerged when the valve was opened. Rebok suggested sawing it open to investigate and inside, Plunkett discovered that the frozen, compressed sample of tetrafluoroethylene gas had polymerised spontaneously into a white, waxy solid: polytetrafluoroethylene (PTFE).

PTFE has a high melting point, is inert to virtually all chemicals and is considered the most slippery material in existence. Now used extensively as an insulator or separator in a wide variety of batteries and other electrical equipment, it remained a military secret until after the end of World War II.

Another secret? - How do they get Teflon to stick to the cookware?


1938 65% of British homes wired for electricity.


1938 German born American engineer Joseph G. Sola invented the Constant Voltage Transformer (CVT). Based on ferroresonant principles, it has a capacitor connected across the secondary winding. The voltage on the secondary winding increases as the input voltage increases; however, the corresponding increasing flux produces an increase in the leakage reactance of the secondary winding, which approaches a value that resonates with the capacitor connected across it. This causes an increased current which saturates the magnetic circuit, limiting any further rise in output voltage due to increased input voltage. The output may not be a pure sine wave, but usable outputs can be obtained with a swing of +/- 25% in the input voltage. Furthermore, the transformer will absorb short duration spikes and, due to the energy storage in the resonant circuit, the output will hold up for short power interruptions of half a cycle (10 milliseconds) or more, making it useful for UPS applications.


1938 Swiss born German physicist Walter H. Schottky explained the rectifying behaviour of a metal-semiconductor contact as dependent on a barrier layer at the surface of contact between the two materials which led to the development of practical Schottky diodes. He had been one of the first to point out the existence of electron "holes" in the valence-band structure of semiconductors.


During his lifetime Schottky contributed many theories, designs and inventions including the superheterodyne radio, the tetrode valve and the ribbon microphone which transformed the electronics industry.


1938 German civil engineer Konrad Zuse completed the world's first programmable digital computer, an electromechanical machine, which he called the Z1. Started in 1936, it was built in his parents' apartment and financed completely from his own private funds. It pioneered the use of binary arithmetic and contained almost all of the functions of a modern computer including control unit, memory, micro sequences and floating point arithmetic. Programs were input using holes punched into discarded 35-millimetre movie film rather than paper tape and data was input through a simple four decimal place keyboard. The calculation results were displayed on a panel of light-bulbs. The clock frequency was around one Hertz. Relays can be used to store data since the position of the contacts, closed or open, can be used to represent a one or a zero, but Zuse did not use this solution because relays were very expensive. Instead he devised a mechanical memory system for storing 16 X 22-bit binary numbers in which each memory cell could be addressed by the punched tape or film. For storing data it used small pins which could slide in slots in movable steel plates mounted between sheets of glass which held them together. The pins could move and connect the plates and their position at either end of the slot was used to store the value 0 or 1. Individual memory units could be stacked on top of one another in a system of layers. In keeping with the German tradition of solid engineering Zuse claimed "These machines had the advantage of being made almost entirely of steel, which made them suitable for mass production".

Zuse was called up for military service in 1939 but was later released from active service, not to work on computers as might be expected, but to work as an aircraft engineer. He continued the development of his ideas in his spare time and, despite the shortages of materials, in 1941 he demonstrated his third machine, imaginatively called the Z3. With limited backing from the DVL, the German Aeronautical Research Institute, this time he was able to use 2,600 relays, which were more reliable than his metal plates, to form the memory registers and the arithmetic unit. The memory capacity was increased to 64 words and the clock frequency was increased to 5.33 Hertz. The Z3 is the undisputed first fully programmable, practical, working digital computer. It was programmed using punched tape but, because of the size limitations of the memory, the Z3 did not store the program in the memory. Otherwise it used the basic architecture, patented by Zuse in 1936, and all the components of a modern computer. Credit for defining this concept was later incorrectly attributed to Hungarian born American mathematician John von Neumann.


(In fact the genesis of the so-called "von Neumann architecture" arose from the First Draft of a Report on the EDVAC, Eckert and Mauchly's second generation computer which incorporated the lessons learned, and the insights gained, from their experience with their earlier, pioneering ENIAC computer. The "draft" progress report about the development of EDVAC was written in 1945 by von Neumann, summarising the ideas of the EDVAC design team. Von Neumann had joined the project as it neared completion, and the report was published under his name only and circulated by his colleague and fellow mathematician Herman Goldstine, much to the annoyance of Eckert and Mauchly and other team members, who pointed out that many of the ideas predated von Neumann's involvement in the project.)

See a diagram and description of the von Neumann Architecture

See also Turing's Universal Machine proposed in 1936.


Zuse had been helped during the construction of the Z1 machine by fellow engineer and inventor Helmut Schreyer who later suggested to Zuse that he should replace the relays in his computers by electronic valves which were over 1000 times faster. Zuse liked the idea and ran with it.

After the success of the Z3, in 1941 the German government at last took notice of Zuse's work but when he proposed a faster computer based on electronic valves, it was rejected on the grounds that the Germans were so close to winning the War that further research effort would take too long and was therefore not necessary. (Hitler expected the War to be over in two years and so had banned long term projects.)


In the early aftermath of the war, West Germany was prohibited from developing electronic equipment, materials were even scarcer than before and electrical power was only available intermittently. His latest computer, the Z4, had also been damaged in the Berlin air raids, but Zuse had managed to rescue it and after many difficulties he was eventually able to restart its development in Switzerland. Completed in 1950, the Z4 was the first computer in the world to be sold to a commercial customer, beating the Ferranti Mark I in the UK by five months and the UNIVAC I in the USA by ten months.


Between 1942 and 1946 Zuse also developed Plankalkül (German, "Plan Calculus"), the world's first high level programming language, but did not publish it at the time. It included assignment statements, subroutines, conditional statements, iteration, floating point arithmetic, arrays, hierarchical record structures, assertions, exception handling, and other advanced features such as goal-directed execution. Intended as an engineering tool for performing calculations on structures, Plankalkül was also used by Zuse to write a program for playing chess. At that time the concept of software was unheard of, and surprisingly he did not start with machine oriented assembly language programming but immediately set out to develop the more complex user oriented language. Plankalkül was the first modern programming language at any level above manual toggle switching or raw machine code. It was finally published in 1972 and the first compiler for it was implemented in 2000 by the Free University of Berlin, five years after Zuse's death.


Until 1950 Zuse lived in complete isolation from the world outside Germany, particularly during the war years, when he remained in Berlin where his first three computers and his workshop were destroyed by allied bombing raids. He had little knowledge of computer developments elsewhere and his work was likewise almost unknown outside of Germany, although IBM obtained an option on his patents in 1946. He was not successful as a businessman and his company was sold to Siemens in 1967. Besides his engineering talents Zuse was also an accomplished artist who sold his paintings during his early years to fund his studies and at the end of the war sold woodcuts to American troops in order to buy food. In retirement he returned to painting as a hobby.


There have been many claimants to the title of The First Computer. For the record, here are the dates when some other early programmable computers became fully operational:

  • 1941 - Zuse Z3 (Germany) A programmable, electromechanical calculating machine (See above).
  • 1942 - ABC (Unfinished) (USA) The Atanasoff-Berry Computer, built by John Vincent Atanasoff and his graduate student Clifford Berry at Iowa State University. It used 311 vacuum tubes (valves) to perform binary arithmetic, but it was not a stored program machine, nor was it fully programmable, though program changes could be input using switches. It was abandoned before it was completed when Atanasoff left to do military service. At the time, neither Atanasoff nor Iowa State University thought it necessary to patent any of the innovative concepts used in the ABC.
  • 1943 - Colossus (UK) built by Post Office engineer Thomas (Tommy) Harold Flowers, and mathematicians Maxwell (Max) Herman Alexander Newman, William Thomas (Bill) Tutte and Alan Mathison Turing at the UK government's highly secret code-breaking centre at Bletchley Park. It was the first all-electronic calculating machine.
  • Colossus was used during WWII to break the ultra high security codes generated by Germany's Lorenz cipher machines. These machines, used by the German High Command, had 12 rotors providing more encryption stages than the Enigma cipher machines, in use by the German military forces, which had only three. These nine extra stages gave the Lorenz machines theoretically a total of 2⁵⁰¹ (approximately 10¹⁵¹) possible ways of encrypting each symbol, an astronomically large number, much larger than the already large 159 X 10¹⁸ possibilities of the standard Enigma machine.

    The Colossus machine was the first to work on symbols and logical operators, not just numbers and arithmetic, and used 1,500 valves (vacuum tubes) to perform Boolean operations. Turing, who had worked with Gordon Welchman on the design of the electromechanical Bombe which was used to crack the Enigma code, contributed to the Colossus design by using probability theory to guess the more likely rotor settings of the coding machine from patterns in the received data, potentially reducing the computer's data processing load.

    Tutte however was the main contributor to breaking the Lorenz code. He succeeded in deducing the Lorenz machine's multi-level logical structure from a single intercept of the code it produced without ever having seen a machine, an achievement regarded by many as the greatest intellectual achievement of World War II.

    The actual Colossus machine was designed and built by Flowers, who took Tutte's logical concept diagrams and converted them into electronic circuits to carry out the same mathematical functions. The machine was programmed using switches and cables in a patch panel which needed rewiring to implement program changes. Data was entered using punched tape. Ten Colossi were built and used in great secrecy and no attempt was ever made to commercialise them. At the end of the war Winston Churchill ordered eight of them to be smashed "into pieces no bigger than a man's hand" and all the drawings to be burned. The two remaining machines were sent to GCHQ, the UK government's top secret communications centre. It was not until 1970 that the existence of the Colossus was revealed publicly as a result of the USA's Freedom of Information Act. (The US government had been given details of Colossus during the war as part payment for US assistance to the UK's war effort.)

  • 1944 - Harvard Mark 1 (AKA IBM ASCC) (USA) Designed by Harvard's Howard Aiken and built by IBM. An automatic digital sequence-controlled computer, based on relays and rotary switches. It used decimal arithmetic and programs were entered using punched tape.
  • 1945 - ACE (UK) Automatic Computing Engine. The world's first stored-program electronic computer system was designed by Enigma code breaker Alan Turing, then working at the UK's National Physical Laboratory (NPL); however, it was not until 1950 that a fully functioning model was made. (See more below).
  • In 1936, working at Cambridge University, Turing had conceived the principles and architecture for a "universal" calculating machine which is now referred to as the Turing Machine. This hypothetical machine could be used to simulate any algorithmic computation and predates the inappropriately claimed von Neumann architecture by nine years. It was described in terms of simple, familiar technologies and consisted of a limitless memory stored on an endless paper tape and a scanner moving forwards and backwards along the tape, reading what was printed there and in turn printing further letters and numbers on the tape. The machine's program and whatever data it needed for the computation were printed on the tape before the computation was started. By placing different programs on the memory tape, the operator could make the machine carry out any procedure that a human computer could carry out. To this day, all stored-program digital computers are modelled on this invention. It was the first to use the modern concept of software. Unfortunately, immediately after the war there was no longer any government imperative to manufacture such a complex machine and likewise there was no pressing commercial demand, and hence development funds were severely limited. However Turing's ideas for the ACE were picked up by Bletchley Park alumni amongst others and used in the development of the Manchester Mark 1 and the Ferranti Mark 1 commercial computers. (See below).

  • 1946 - ENIAC (USA) Electrical Numerical Integrator and Calculator, built by John Presper Eckert and John W. Mauchly at the University of Pennsylvania. It used 18,000 vacuum tubes and consumed almost 200 kilowatts of electrical power. It was a single purpose machine designed to calculate artillery firing trajectories. Funding for the project was secured by Herman Heine Goldstine, an ordnance mathematician at the U.S. Ballistic Research Laboratory, who teamed up with Mauchly. Calculations used decimal rather than binary arithmetic and it was not a stored program machine. Programs were entered by setting switches and rewiring patch panels, a process much slower than programming Turing's stored-program ACE machine. ENIAC was the forerunner of the UNIVAC (Universal Automatic Computer) machine launched by Remington Rand in 1951 after they had purchased Eckert and Mauchly's company. The ENIAC used design concepts Mauchly had copied from Atanasoff's ABC machine, for which Atanasoff received neither credit nor recognition. In 1973, when Sperry Rand tried to extract royalties for the use of its ENIAC computer patent, they were challenged in court by Honeywell and the court voided Sperry Rand's patent, declaring it to be a derivative of Atanasoff's inventions.
  • 1948 - Manchester Mark 1 (AKA "Baby") (UK) Built by Max H.A. Newman, who had worked on Tutte and Flowers' Colossus machine, and Freddie Calland Williams, with software written by Tom Kilburn. The first computer with a true stored-program capability and arguably the von Neumann architecture, it used the persistence of the image on the phosphor screen of a cathode ray tube (CRT) for data storage and binary arithmetic for processing. With a clock speed of 1 MHz, it was the fastest computer in the world at the time.
  • 1951 - Ferranti Mark 1 (UK), derived from the Manchester Baby, was one of the first commercially available digital computers. Its programming system was designed by Turing.
  • 1949 - EDSAC (UK) Electronic Delay Storage Automatic Computer, built by Maurice V. Wilkes at Cambridge, was the first to use a mercury acoustic delay line for data storage. It was a true general purpose stored-program machine using binary arithmetic. EDSAC was conceived as a research machine and is not to be confused with Eckert and Mauchly's EDVAC (Electronic Discrete Variable Automatic Computer), which did not become fully operational until 1952.
  • 1953 - LEO (UK) Lyons Electronic Office, the world's first computer to be used for commercial business applications, was derived from the EDSAC machine and developed by J. Lyons and Company, a British catering firm, with the support of EDSAC's Maurice Wilkes.
  • 1951 - After three years of development, Whirlwind, the first computer with a "real-time" operating system, went live at MIT's Servomechanisms Laboratory. Built by Jay Forrester and Robert Everett, its intended application was as a flight simulator for bomber crews. Previous computers were dedicated to single tasks, ran in batch mode and produced a printed output. This was not responsive or fast enough for the flight simulator application, which was required to accept continually varying control inputs from the pilots and to produce an immediate and continuous display of the status of the aircraft's systems and its aerodynamic conditions on an instrument panel.
  • The design used 5000 valves (vacuum tubes) and major innovations included the first use of bit-parallel processing, in place of bit-serial mode, to speed up processing, and ferrite core memory to replace unreliable mercury delay lines and cathode ray tube storage. A later development to update the design with transistors replacing the valves was undertaken by Ken Olsen, who eventually left the project to found the Digital Equipment Corporation (DEC), which produced the first generation of minicomputers.

  • 1952 - Eckert and Mauchly's second generation machine, the EDVAC (Electronic Discrete Variable Automatic Computer), became fully operational. Their earlier ENIAC computer had been designed with the prime purpose of calculating artillery firing trajectory tables for the US Army. As its development progressed, the pair had identified numerous opportunities for improvements but, because of wartime exigencies, a design freeze was imposed to get ENIAC into service as soon as possible and they were not able to incorporate all of their ideas into the machine. EDVAC provided the opportunity to develop their ideas further. Design started in 1946 with an initial study group which included members of the ENIAC team, Herman Goldstine and another mathematician, Arthur Walter Burks. They were joined later, in a consulting role, by John von Neumann, a mathematician who had worked on the Manhattan Project. One of EDVAC's key ideas was that the computer could store programs for different applications in its electronic memory rather than being programmed for each new application using mechanical switches and patch cables as in the ENIAC. The EDVAC also stored its data in a Mercury acoustic delay line.

Echoing Babbage's experience, although four of the first eight modern computers were British, UK innovation once more was not translated into commercial success.


Starting with these early projects and helped by the development of semiconductors, microprocessors and Moore's Law, computer systems, software and applications have evolved dramatically, becoming essential tools in improving the global economy across the board. Industry, commerce, security, communications, personal lifestyles and wellbeing are now dependent on, or enhanced by, these technologies.

Surprisingly, this spectacular evolution and growth were not anticipated, not just by Winston Churchill, but also by some of the industry's key players involved in creating it, including Atanasoff and Berry, noted above, as the following quotations testify:

Thomas Watson, President of IBM. In 1943 - "I think there is a world market for maybe five computers."

Ken Olsen Pioneer of minicomputers and founder of Digital Equipment Corporation. In 1977 - "There is no reason anyone would want a computer in their home."


In the 1960s people were concerned that automation due to the introduction of computers would lead to unemployment, and commentators speculated on the "problems" of too much leisure time. They were proved wrong. New computer applications were invented and new opportunities were created, bringing with them many new jobs.

Today doom-mongers are expressing similar concerns about the future impact of artificial intelligence (AI).


1939 The German company I.G. Farbenindustrie filed a patent for polyepoxide (epoxy). Benefiting from German technology, epoxy resins were made available to the consumer market almost four years later by an American manufacturer. They have very strong adhesive properties, being one of the few materials which can make effective joints with metal. They are dimensionally stable and have similar expansion rates to metals. When combined with fibreglass they can produce an extremely strong composite material, known as Glass Reinforced Epoxy (GRE), strong enough for use in aircraft components.

Because of epoxy's chemical resistance and excellent electrical insulation properties, electrical parts such as batteries, relays, coils, and transformers are insulated with epoxy.

See also polyester resins.


1939 Almost two thirds of British households have electric lighting.


1939 Following speculation by Arthur Eddington, nuclear physicist Hans Bethe, a Jewish refugee from Germany, working in the USA explained in quantitative terms how the energy in the Sun and the stars could be generated by nuclear fusion. It involved a series of fusion reactions in which Hydrogen atoms were first transformed into Hydrogen isotopes which in turn were transformed into Helium with the release of large amounts of energy.


The process is as follows:

The Sun's immense gravitational forces press Hydrogen atoms (protons) closer together until two of them touch, but because of the electrostatic repulsion of their positive charges the pair is unstable, and one of the protons undergoes a form of radioactive decay, turning it into a neutron and emitting a positron and a neutrino. This forms a Deuteron (a nucleus of one proton and one neutron), which is more stable than the two repelling protons. This transmutation (beta decay) of protons into neutrons plus beta particles is mediated by the weak nuclear force.

(This is slightly different from the more common beta decay reaction in which the weak force causes the neutron which is slightly heavier than the proton, and hence more unstable, to decay into a proton, an electron and a neutrino.)

The Sun's massive gravitational forces combined with its extremely high temperature provide the necessary conditions for nuclear fusion to take place. Once a Deuteron is formed it will fuse with another free proton to form Helium-3 (one neutron and two protons), releasing tremendous amounts of energy. See the equations and diagram of the solar fusion process. In turn these Helium-3 nuclei fuse with even more particles to form ever more complex, heavier nuclei such as the Helium isotope ⁴He₂, releasing two protons and even more energy in further reactions.
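In the same notation as the fission equations above, the first steps of this proton-proton chain can be written as follows (these are the standard textbook forms, not taken directly from Bethe's paper):

  • Two protons fuse, one of them decaying into a neutron, forming a Deuteron and emitting a positron and a neutrino.
  • ¹H₁ + ¹H₁ → ²D₁ + e⁺ + ν

  • The Deuteron fuses with another proton to form Helium-3, emitting a gamma ray.
  • ²D₁ + ¹H₁ → ³He₂ + γ

  • Two Helium-3 nuclei fuse to form Helium-4, releasing two protons which can start the cycle again.
  • ³He₂ + ³He₂ → ⁴He₂ + 2[¹H₁]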

Though the Sun's release of energy in these fusion reactions is due to the strong force, it is the weak force which initiates the process.
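In summary, the main branch of this "proton-proton chain" proceeds through the following reactions (a standard textbook summary rather than the notation of Bethe's original paper):

    1H1 + 1H1 → 2H1 + e+ + ν   (two protons fuse; one converts to a neutron, emitting a positron and a neutrino)
    2H1 + 1H1 → 3He2 + γ       (the deuteron captures a further proton to form Helium-3)
    3He2 + 3He2 → 4He2 + 1H1 + 1H1   (two Helium-3 nuclei fuse into Helium-4, releasing two protons)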


Ever since then, attempts have been made to duplicate the final stage of this fusion process on Earth in the quest for cheaper, safer nuclear power generation.


Bethe presented his theory in a paper entitled "Energy Production in Stars" which won him the Nobel Prize for Physics in 1967.


1939 The Einstein–Szilárd Letter was sent to President Franklin Delano Roosevelt, warning him that recent scientific research had made it probable that a nuclear chain reaction could be set up in a large mass of Uranium, releasing vast amounts of power. This in turn could enable the construction of extremely powerful bombs which could threaten the USA. The letter was written by Szilárd with the collaboration of fellow émigré Hungarian physicist Eugene Wigner and signed, somewhat reluctantly, by the more famous Albert Einstein to give it added credibility. It was dated August 2, one month before the outbreak of World War II when Germany invaded Poland, and was received by Roosevelt on October 11th, one month after the outbreak of the war.


Alexander Sachs, an economist and close confidant of Roosevelt, added his support by recalling Napoleon Bonaparte's reaction when told of American inventor Robert Fulton's proposition for steam-powered engines to propel his ships. Napoleon is said to have replied: "You would make a ship sail against the winds and currents by lighting a bonfire under her decks? I have no time for such nonsense."


Roosevelt responded by setting up the Uranium Committee on October 21st, with civilian and military representation, headed by Lyman James Briggs, director of the U.S. National Bureau of Standards, but with little initial funding ($6,000 to purchase Uranium and Graphite for their experiments) progress was slow. It was not until the Japanese attack on Pearl Harbour in December 1941, two years later, that the U.S. decided to commit the necessary resources.


1940 Using mass spectroscopy, American physicists John R. Dunning at Columbia University and his colleague Alfred O.C. Nier at the University of Minnesota demonstrated that, when the two major isotopes of naturally occurring Uranium were bombarded with slow neutrons, fission was more readily produced in the comparatively rare Uranium-235, which makes up only 0.7% of the source, than in the more abundant Uranium-238 which makes up the other 99.3%. (A third, even rarer, isotope Uranium-234 amounts to only about 0.005% of the total). They recognised that using Uranium-235 could make a chain reaction possible, but this would require the fissionable Uranium-235 isotope to be separated from the Uranium-238 and concentrated into a critical mass to enable practical high energy applications.

As a result of their work several alternative methods of Uranium-235 enrichment were initiated for both nuclear power and military applications.


1940 The possibility of spontaneous nuclear fission, a form of radioactive decay that is found only in very heavy unstable chemical elements with atomic numbers above 90 (mass numbers above about 230), was first confirmed by Soviet physicists Georgy Flyorov and Konstantin Petrzhak. They carried out their experiment 60 metres (200 ft) underground in Moscow Metro's Dinamo station to shield the test sample from induced fission caused by neutrons arising from stray cosmic rays. Because spontaneous fission occurs only very rarely, and usually only at very low rates, it is unlikely that a chain reaction would result today from such events in naturally occurring ores, due to the low concentration of the active material remaining after billions of years of radioactive decay since the heavy elements were formed. However Japanese-American nuclear scientist Paul Kazuo Kuroda speculated in 1956 that ancient Uranium deposits in Western Africa could have contained a high enough concentration of U-235 to provide the conditions necessary for a natural nuclear chain reaction to develop. His view was corroborated in 1972 by evidence discovered in Uranium deposits found in what is now modern day Gabon, showing that a natural spontaneous nuclear reaction could indeed have occurred about two billion years ago.

In modern times, when pure samples of some heavy isotopes with very high spontaneous fission rates, such as Plutonium-240, are concentrated into a small volume, a localised fission reaction may be triggered, possibly destroying the sample.


1940 (March) Jewish physicists, Austrian born Otto Robert Frisch and German born Rudolf Peierls, both refugees from the Nazis working at the University of Birmingham in the UK under the direction of Australian physicist Marcus Oliphant, designed the first theoretical mechanism for the detonation of an atomic bomb. It was published in a paper, known as the Frisch-Peierls Memorandum, describing the processes and the materials required to produce an atomic explosion in a practical sized device, triggered by conventional explosives. Up to that point, researchers in the USA, including Einstein and Szilárd, following Perrin's prediction, had believed that a nuclear bomb would be too heavy to be transported by air. Frisch and Peierls showed that by placing a neutron deflector around the fuel to prevent the escape of neutrons from the fuel mass, eliminating the moderator (unnecessary in a bomb), and enriching the fuel, such a bomb could be constructed from a metallic sphere of Uranium-235, 4.2 cm (1.66 in) in diameter, weighing only 1 kg (2.2 pounds), and could in fact be delivered by air.

They also showed that the total mass of Uranium fuel needed for the bomb could be safely transported in two separate parts, each with less than the critical mass. At an appropriate time, these two parts could be forced together at high speed to assemble the critical mass in a single entity, thus triggering the explosion. The memorandum also predicted the destructive power of the explosion to be equivalent to 1000 tons of dynamite, and gave an indication of the resulting radioactive fallout. Finally it estimated that a plant to produce 1 kg of Uranium-235 per day would cost £5 million and could be available in as little as two years, but this would require a large skilled labour force which unfortunately was already committed to other critical tasks of the British war effort.
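As a rough arithmetic check on the size quoted above (an illustrative sketch only, assuming a density for Uranium metal of about 19 g/cm3; neither the calculation nor the density figure comes from the memorandum itself):

    import math

    density = 19.0            # g/cm3, approximate density of Uranium metal (assumed)
    mass = 1000.0             # g, the 1 kg sphere quoted in the memorandum
    volume = mass / density   # cm3
    radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
    print(f"diameter = {2 * radius:.1f} cm")  # ~4.7 cm, close to the 4.2 cm quoted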


In 1939, Frisch had published with Lise Meitner the explanation of nuclear fission and quantified the energy released and described its potential for a chain reaction. Peierls' prior work mainly involved the use of quantum theory to explain semiconductor behaviour.


1940 (April) The feasibility of an atom bomb caused a stir in the small community of nuclear physicists in the UK. Britain was already at war with Germany and it was feared that Germany might also be aware of such possibilities and could pose a serious threat. This led to the establishment of the MAUD Committee, tasked with an all-out effort to develop nuclear weapons; the work was subsequently carried on under the deliberately misleading name "Tube Alloys" to maintain secrecy. The committee was chaired by George Paget Thomson and staffed with eminent physicists including Marcus Oliphant, Patrick Blackett, James Chadwick, John Cockcroft, and Oliphant's assistant Philip Moon. The Austrian and German born Frisch and Peierls were not initially included in the team since they were officially classified as enemy aliens, though subsequently they both made significant contributions to the UK project and also at Los Alamos as part of the British Mission to the Manhattan Project.

After fifteen months of work, the research culminated in two MAUD reports, "Use of Uranium for a Bomb" and "Use of Uranium as a Source of Power", known colloquially as the "Bomb" and the "Boiler". The reports also included cost estimates and technical specifications for a large Uranium enrichment plant. Unfortunately, at that time, Britain did not have the massive manufacturing or financial resources needed to produce such weapons and even if it did, its production facilities would have been highly vulnerable to German bombers.


The Frisch-Peierls memorandum was therefore passed informally to Lyman Briggs, the Director of the U.S. Uranium Committee, with a reminder that nuclear fission had been discovered in Nazi Germany nearly three years earlier. The information was sent as part of the Tizard Mission and later also as part of a formal MAUD Report in July 1941 outlining the conclusions of the MAUD committee, confirming the necessity and feasibility of airborne atomic weapons and that they could be available within two years. The initial response from the Americans was less than enthusiastic. Research up to that point had not produced serious or encouraging opportunities for applications of high energy nuclear physics, so some key players were somewhat sceptical about the prospects. Furthermore, unlike the UK, the USA was not under threat militarily and was still ostensibly non-aligned.

By August 1941 the MAUD committee were puzzled that they had received virtually no comments about their report and so Marcus Oliphant, an original member of the committee, flew to Washington in the U.S. to find out why the Americans were ignoring the MAUD Committee's findings.


Calling on Briggs, Oliphant discovered that he had put the reports in his safe and had not shown them to members of his committee, nor to any members of America's scientific community. Surprised and distressed, Oliphant then arranged to meet all the members of the Uranium Committee as well as other key physicists to enlist their support and goad them into action. One of these was his friend Ernest Lawrence at the University of California, Berkeley, with whom he had common interests and experience in particle accelerators. In 1932 Lawrence had designed the cyclotron for this purpose, while around the same time Oliphant was using a particle accelerator to investigate nuclear fusion. As an aside to the meeting, Oliphant encouraged Lawrence to adapt his cyclotron, not just for mass measurements, but also for use in separating isotopes.

Oliphant gave a copy of the Frisch-Peierls memorandum to Lawrence, who brought in Robert Oppenheimer to check the figures. Oppenheimer confirmed that an atomic bomb was feasible and that German possession of such weapons posed a real threat, thus securing their support. With these two influential allies, Oliphant was able to obtain top level backing and cooperation. This led to the eventual allocation of massive development and production resources by the U.S. to the weapons programme and was one of the key events leading to the setting up of the Manhattan Project in August 1942 and the UK participation in it.


1940 John Turton Randall and Henry Albert Howard Boot, also working on Oliphant's team at the Nuffield Laboratory Physics Department of Birmingham University, developed the first practical cavity magnetron, a high power microwave transmitter valve (vacuum tube) which was an essential component in wartime Radar transmitters. It could generate over 1000 times the peak power of any other microwave generator existing at the time.

British radar developments were among the country's most highly secret projects and provided a critical advantage during the aerial warfare of World War II. Even though Frisch and Peierls were allowed to work on the British atomic bomb project, they were excluded from working on radar.


Now the magnetron is an essential component in microwave ovens.


The magnetron had been invented by Hull in 1920 but its low power output limited its possible applications. Randall and Boot dramatically improved on this by using resonant cavities to reinforce the oscillations generated by the basic cathode-anode structure of the device. Instead of a thin walled tube, the anode was constructed from a large cylindrical block of copper.

See a diagram and picture of a Resonant Cavity Magnetron.

The photograph shows the eight cavity magnetron (E1189) taken to the USA in 1940 by the Tizard Mission (See next). It generated a peak power of over 10 kilowatts at a frequency of 3 GHz from an anode block less than 3 inches (75 mm) in diameter. It was an amazing leap in technology which astounded the Americans. Improved versions manufactured a few months later produced over 100 kilowatts of microwave power. The anodes used in the prototypes had six cavities which were machined using the chamber of a Colt revolver as a drilling template since it was about the same size.

How it Works - A cylindrical hole bored through the centre of the anode block forms the main interaction space between the electrodes. Spaced equally around this central chamber, a number of cylindrical cavities are bored into the block, parallel to the main chamber. A narrow slot along the length of each of these cavities connects them to the central chamber. At the critical magnetic field, the electrons sweep past these apertures inducing a resonant, high-frequency radio field in the cavity, which in turn causes the passing electrons to bunch into groups. The bunches are reinforced as the electrons circulate around the central chamber passing each resonant cavity in turn, in a similar way to the velocity modulation on which the travelling wave tube (TWT) and the klystron depend. A portion of the field is extracted with a short antenna protruding into one of the resonant cavities and connected to a waveguide or coaxial cable and fed to the RF load.


See more about The Magnetron, Loomis and the Foundation of the MIT Radiation Lab.


1940 During the UK's darkest days in World War II, the British government sent a Technical and Scientific Mission, led by chemist Henry Tizard, to the USA, which was still neutral at the time, to seek the cooperation and resources desperately needed to develop UK military technology, as well as access to US technology. The UK contributed details of Randall and Boot's cavity magnetron - (See Loomis), Frank Whittle's jet engine, Robert Dippy's Gee radio navigation system and a summary of the UK's atomic research outlined in the Frisch-Peierls Memorandum, which proved the feasibility of an airborne atomic bomb, supported by the relevant calculations of the size of the critical mass required (until then it had been thought that a bomb would be so big that it could only be taken to its target by ship). There were also designs for rockets, superchargers, gyroscopic gun-sights, submarine detection devices, self-sealing fuel tanks and plastic explosives.


The Tizard Mission was carried out despite strong reservations by Winston Churchill and Robert Watson-Watt, the radar pioneer, and although it was hailed as a success, the UK gave away technology that had immense commercial value after the war. In return it got help particularly with radar, a stronger Anglo-American alliance at a very critical time and a seat at the table of nuclear powers.

While Tizard was away, his job was abolished.


1940 Development started on LORAN, the LOng RAnge Navigation system, which was one of the Rad Lab's first projects and its only major non-microwave project. It was a development of the British Gee system whose design details were provided to the US as part of the Tizard Mission. The Gee system was designed for short range bombing missions and transmitted in the frequency range 20 to 85 MHz (15 to 3.5 metres wavelength), achieving a range of 400 miles. LORAN used the same hyperbolic grid system as Gee but was designed for long range radio navigation over the oceans. It used a frequency range of 1850 to 1950 kHz (150 to 160 metres wavelength) which enabled a range of 1200 miles, but with lower accuracy, particularly at the extremes of the range. (Longer waves propagate further because the ground wave effect, which slows the wave front near the ground, bends them around the curvature of the Earth, and because they are also reflected back by the ionosphere.)
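The hyperbolic principle behind both systems is straightforward: the receiver measures the difference in arrival times of synchronised pulses from two stations, and that difference fixes a constant difference in distance from them, i.e. a hyperbolic line of position; a second pair of stations gives a second hyperbola and the fix is at their intersection. A minimal sketch (illustrative figures, not actual Gee or LORAN parameters):

    C = 299_792_458.0  # speed of light in m/s

    def range_difference(delta_t_us):
        # distance difference in metres to two stations, from a measured
        # pulse arrival-time difference in microseconds
        return C * delta_t_us * 1e-6

    # A 100 microsecond time difference places the receiver on the hyperbola
    # of points ~30 km closer to one station than to the other.
    print(range_difference(100.0))  # ~29979 m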


The project was led by John Alvin Pierce, assisted for eight months by Robert Dippy, the designer of the Gee system, and was supervised personally by the Rad Lab founder Alfred Loomis himself.

The system went live in 1943.


1941 Silver oxide-Zinc (Mercury free) primary cells were developed by French professor Henri André, using cellophane as a semi-permeable membrane separator which impeded the formation of the dendrites which cause short circuits.


1941 Bell Labs researcher Russell S. Ohl discovered that semiconductors could be "doped" with small amounts of foreign atoms to create interesting new properties. He discovered the principles of the P-N junction (with some hints from Walter Brattain) and invented the first Silicon solar cell, a P-N junction that produced 0.5 volts when exposed to light. Ohl's invention of the semiconductor junction and his explanation of its working principles laid the foundations on which the invention of the transistor was based. Unfortunately, Ohl's essential contribution has almost been forgotten.


1941 American inventor B.N. Adams filed for a patent on the water activated battery. Working at home, he had developed the battery for military, marine and emergency use and he demonstrated it to the US Army and Navy. Unfortunately the US Army Signal Corps declared the invention to be unworkable. Nevertheless Adams was awarded a patent in 1943. At the height of World War II however, the US Signal Corps decided the idea was indeed feasible after all and the government entered into procurement contracts with several battery making companies without informing Adams. He subsequently discovered in 1955 that his invention had been in use for some time by the US government, which by then claimed the idea lacked novelty, was obvious, and was therefore not patentable. In 1966 Adams sued the US government; the Supreme Court found in his favour and his 1943 patent was upheld.


1941 Patent granted to American inventor Harold Ransburg for the electrostatic spray coating process in which the paint is electrostatically charged and the surface to be painted is grounded, an idea first proposed by Nollet in 1750. Because of the electrostatic attraction between the positively charged paint and the grounded body, the majority of the paint reaches its target, resulting in major savings.


1941 Thick Film Circuits developed by Centralab division of Globe-Union Inc in the USA - An innovative use of screen printing technology patented in 1907. They used resistive inks and silver paste printed on ceramic substrates to form printed resistors, capacitors, links and other components in miniature circuits used in proximity fuses. Similar printing processes are used today to manufacture thin film batteries.


1941 American chemist Glenn T. Seaborg, investigating the transmutation of Uranium-238 in small scale experimental subcritical reactor piles at the Metallurgical Laboratory at the University of Chicago, isolated Plutonium-239.


The previous year, working with colleagues Edwin M. McMillan, Joseph W. Kennedy, and Arthur C. Wahl at the Berkeley Radiation Laboratory in California he had discovered microscopic traces from a few atoms of Plutonium when bombarding Uranium with a beam of deuterons in the lab's 60 inch (150 cm) cyclotron particle accelerator.

(Deuterons (2H1) are the nuclei of heavy Hydrogen, an isotope of Hydrogen, consisting of a single proton and a single neutron.)

Note the distinction between discovery and isolation. Discovery refers to the first nuclear and chemical proof of the existence of atoms of a new element, often confirmed only by particle counters or electronic instruments, while isolation is the procurement of the first weighable amount in pure form.


Seaborg's investigations into the properties and production of Plutonium were continued at Chicago's "Met Lab" where Szilárd and Fermi were developing the Chicago pile CP-1. By the time of his arrival in Chicago, Berkeley had produced only micrograms (less than one thirty millionth of an ounce) of Plutonium, so small an amount that it could not be seen even under a powerful microscope. Seaborg's task was to identify its characteristics, including whether it was fissionable, and to scale up the production from micrograms to kilograms, a billion times more than had previously been produced.

He maintained the method of producing the transmutation by the absorption of another particle into the Uranium nucleus, but instead of the beam of deuterons previously used as the source of these particles, he performed the transmutation in an atomic pile, using the blizzard of neutrons created by the Uranium fission reactions in the pile as the source.

He determined that while the isotope Uranium-238, which forms over 99% of natural Uranium, is not normally fissionable, it can be transmuted, in a three stage disintegration process initiated by bombarding it with slow neutrons (which do not have sufficient energy to cause fission), into Plutonium-239 which is readily fissionable. The reaction proceeds as follows:


  1. Neutrons are absorbed into Uranium-238 creating Uranium-239, a more unstable isotope of Uranium with a half life of about 23.5 minutes.

      238U92 + 1n0 → 239U92

  2. This Uranium-239 nucleus quickly disintegrates by beta decay, in which a neutron is converted into a positive proton by the ejection of an electron with its negative charge. This transforms the nucleus into Neptunium-239 which has a half life of 2.36 days.

      239U92 → 239Np93 + 0e-1

  3. This is quickly followed by a second beta decay in which the Neptunium-239 is transformed into Plutonium-239 which has a half life of 24,000 years.

      239Np93 → 239Pu94 + 0e-1

    Because of its relatively short half life, Plutonium is not found naturally. (By comparison, the half lives of Uranium-235 and Uranium-238 are 700 million years and 4.5 billion years respectively, which explains why there is still so much Uranium around.)
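These half lives translate directly into how quickly each species disappears, via the standard exponential decay law. A minimal sketch using the figures quoted above:

    def remaining_fraction(t, half_life):
        # fraction of a radioactive species left after time t
        # (t and half_life in the same units)
        return 0.5 ** (t / half_life)

    print(remaining_fraction(60, 23.5))      # Uranium-239 after 1 hour: ~17% left
    print(remaining_fraction(7, 2.36))       # Neptunium-239 after a week: ~13% left
    print(remaining_fraction(24000, 24000))  # Plutonium-239 after 24,000 years: 50% left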


In nuclear power plants, even with enriched Uranium fuel, Uranium-238 still makes up the great majority of the total fuel load and, since the fission of the Uranium-235 produces excess neutrons, it is inevitable that such reactors will produce Plutonium-239 as a byproduct, albeit in small quantities.

Plutonium-239 decays naturally into Uranium-235 with the emission of alpha particles. Plutonium isotopes also emit neutrons, beta particles and gamma rays.


Seaborg went on to demonstrate that Plutonium-239 is not only fissile, but has a higher probability for fission than Uranium-235. Fission of Plutonium can also be initiated by fast neutrons as well as slow neutrons resulting in the fission fragments of two or sometimes three smaller elements.


There are now known to be 15 isotopes of Plutonium all of which are radioactive and fissionable with fast neutrons, though only two are fissile (with slow neutrons).

A slow neutron can split Plutonium-239 into Barium-142 and Strontium-95 with the emission of 3 fast neutrons and energy of 207 MeV which is not much different from the 3 fast neutrons and 200 MeV of energy released by the fission of Uranium-235.

The average number of neutrons emitted per fission event is 2.88 for the Plutonium compared with 2.45 for the Uranium; however, because fast neutrons can also initiate fission in Plutonium, there will be many more available neutrons, so that a Plutonium chain reaction can be achieved with about one third of the critical mass of fuel needed for Uranium. See more about Plutonium fission.
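To put these fission energies into perspective, a rough order-of-magnitude sketch of the energy released by fissioning one kilogram of Plutonium-239, using standard physical constants (the TNT comparison is an added illustration, not a figure from the text):

    AVOGADRO = 6.022e23          # atoms per mole
    MEV_TO_J = 1.602e-13         # Joules per MeV

    molar_mass = 239.0           # g/mol for Plutonium-239
    energy_per_fission = 207.0   # MeV, as quoted above

    atoms_per_kg = 1000.0 / molar_mass * AVOGADRO
    energy_joules = atoms_per_kg * energy_per_fission * MEV_TO_J
    print(f"{energy_joules:.2e} J")  # ~8 x 10^13 J, roughly 20 kilotons of TNT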


Seaborg and McMillan were jointly awarded the Nobel Prize in Chemistry in 1951 "for their discoveries in the chemistry of the transuranium elements".


1942 Building on Chadwick's work, the first controlled, self-sustaining nuclear chain reaction was achieved by a team, including the Hungarian Leo Szilárd, led by the Italian Enrico Fermi in an atomic pile, known as the Chicago pile CP-1, set up in a squash court at the University of Chicago. It was a crude structure built during a period of only 16 days between November 16 and December 1, achieving criticality the following day.


Fission occurs when slow-moving neutrons collide with a fissile material such as Uranium-235 causing its atomic nuclei to split with the release of energy and additional fast-moving neutrons. Controlled nuclear fission is initiated when fast neutrons resulting from the natural disintegration of the radioactive fuel are slowed down by the essential moderator material surrounding the fuel to produce slow neutrons which go on to split further nuclei resulting in the ejection of more fast neutrons and a self-sustaining nuclear chain reaction with the release of more energy at every step. If this chain reaction goes too fast, it becomes an atomic explosion, but under control it could produce a steady flow of energy. An essential requirement of the reactor was the ability to control the rate of the reaction and to shut it off in case of emergency. Fermi found that Cadmium would absorb neutrons. If the chain reaction speeded up, Cadmium rods could be inserted into the pile to slow the reaction down and could be removed to accelerate it again.


The pile was initially conceived with a spherical shape to minimise the critical mass needed for the reaction. It was constructed from 45,000 high purity graphite moderator bricks, weighing a total of 330 tons, stacked in 57 layers, and was fuelled by natural (unenriched) Uranium which meant that a very large quantity was required. This included 4.9 tons of Uranium metal and 41 tons of Uranium oxide formed into 9,000 pieces which were held in cavities in the moderator bricks and evenly distributed throughout the layers. The layers were arranged with one layer of solid bricks alternating with two layers of bricks incorporating cavities designed to hold the cylindrical fuel slugs. There were also slots between the bricks to accommodate ten 4 m (13 feet) long Cadmium control rods which were made by nailing Cadmium sheet to wood strips.

Because this was such advanced and secret technology, the production of the exotic new materials needed for the reactor depended on novel processes. Consequently they were in short supply and batches could be of variable quality, and there were inevitably other previously unknown risk factors to be expected. As the construction progressed, the neutron count from the radioactive fuel was monitored with a locally designed Boron trifluoride detector, as well as Geiger counters, and the placements of the bricks were improvised by trial and error to take account of the varying purity of both the graphite and the fuel to ensure an even distribution of the neutron flow within the pile.

As a result of these modifications, the pile turned out to be circular as seen from above but elliptical as seen from the side, 6.1 m (20 feet) high and 7.6 m (25 feet) in diameter at its widest point and 1.8 m (6 feet) across at the top and bottom. Because of its loose bricks and top heavy shape, resting on a narrow base, it was enclosed in wooden scaffolding to keep it stable.

See a drawing of the Chicago CP-1 Atomic Pile


The experiment commenced on December 2 when Fermi called for all the control rods to be extracted, except for one which was managed by physicist George Weil. Fermi monitored the neutron count during this process, comparing it to the expected value he had calculated and also to previous measurements taken during the construction of the pile. When the slow "click-click" of the neutron counter confirmed his expectations, he proceeded by instructing Weil to withdraw the last control rod step by step in increments of 15 cm (6 inches). At each step he observed that the neutron count slowly built up to a higher level than the previous count and settled at that higher level, and while this was going on he was making calculations on his slide rule to confirm that the level was in line with his expectations. (There were no electronic calculators or computers in those days.) At each stage the clicks came more and more rapidly until, when the control rod was about halfway out, they began to merge into a roar of white noise, at which point Fermi calmly confirmed that the pile had gone critical and instructed Weil to reinsert the control rod to shut off the reactor, otherwise the reaction would have continued to increase. It had run for about 4.5 minutes generating about 0.5 watts of power. When the experiment was repeated a few weeks later the reactor produced a maximum power level of 200 watts.

It was estimated that the Chicago reactor had achieved a neutron multiplication factor k of 1.0006. Note that nuclear reactors used in power generation are designed to work at the point of criticality with a k of 1.0.
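The significance of k is easy to see numerically: each generation of neutrons is k times larger than the last, so after g generations the population has grown by a factor of k to the power g. A minimal sketch (the millisecond generation time is an assumed order-of-magnitude figure for a graphite pile, not a value recorded for CP-1):

    import math

    k = 1.0006        # CP-1's estimated neutron multiplication factor
    gen_time = 1e-3   # seconds per neutron generation (assumed)

    doublings = math.log(2) / math.log(k)  # generations needed to double the population
    print(f"{doublings:.0f} generations, ~{doublings * gen_time:.1f} s to double")
    # ~1155 generations, or roughly 1.2 seconds to double - slow enough for Fermi
    # to watch the counters climb and for the Cadmium rods to keep control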


Surprisingly, Fermi's nuclear reactor did not have any cooling system or radiation shield and it has been said that, if the Cadmium control rods had failed or if the calculations had been wrong, half of Chicago could have been blown up. This risk however is overstated. The safety of this initial pile was not dependent solely on the Cadmium control rods. The reactor had just enough of the scarce and expensive new materials to achieve critical mass, and Fermi's construction was such that any overheating would cause deformation or disintegration of the reactor pile, destroying the critical mass concentration before enough fissions had occurred to build up enough energy to cause an explosion. Nevertheless, as a precaution, physicist Norman Hilberry stood poised with an axe during the start-up, ready to cut a rope and release more Cadmium control rods that would stop the reaction in an emergency. If all else failed, a three-man "suicide squad" of physicists stood ready to drench the pile with Cadmium sulphate.


This event marked the birth of the nuclear power industry and also the atom bomb.

In December 1944 Fermi and Szilárd jointly filed a patent with the U.S. Patent Office for the neutronic reactor, in which they described the method by which a self-sustaining nuclear chain reaction had been achieved - the first atomic pile.

The highly classified patent was finally declassified and issued in May 1955, almost 11 years after it had been filed, by which time Fermi had been dead for six months.


See more about Nuclear Energy - The Theory


1942 American chemist Harry Coover, working on materials for optically clear gun sights, accidentally discovered cyanoacrylate, a fast acting transparent adhesive. It proved too sticky for the job in hand and its true potential was not realised until 1958 when it was marketed as Superglue. It is now used extensively in industry for gluing together small sub-assemblies such as battery packs.


Superglue's ability to stick skin together was turned from a problem into a benefit during the Vietnam War (1959-1973), saving the lives of countless soldiers when it was used to seal battlefield wounds before the injured could be transported to a hospital.


1942 American chemists William Edward Hanford and Donald Fletcher Holmes working at du Pont de Nemours invented the process for making the multipurpose material polyurethane. Now extensively used as a foam insulating material in a wide range of applications.


1942 Glamorous Hollywood movie star Hedy Lamarr, born Hedwig Kiesler in Vienna, and American composer and concert pianist George Antheil, were granted a U.S. patent for a secret communication system which was the first to use frequency hopping as a method of avoiding jamming (deliberate interference) by the enemy. Constant switching between different transmission frequencies by the communicating parties prevents the jammer from knowing which frequency to attack. Their initial application was a guidance system for torpedoes which was offered to the US Navy.

The mechanisms used to control the frequency hopping were two synchronised paper rolls, similar to those used to program pianolas (player pianos) at the time, one in the transmitter and one in the receiver. The communications frequency was determined by the tuning circuits of the transmitter and receiver, each of which contained a bank of capacitors from which individual capacitors could be selected. The appropriate capacitors from the bank were connected to the tuning circuits for controlled intervals by switches turned on and off in a sequence determined by punched holes in the pianola rolls, giving a possible 88 distinct frequencies.

Once the communications link was established, guidance was by conventional remote control circuits.
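In modern terms, the pianola rolls amount to both ends stepping through a shared pseudorandom channel sequence in lock-step. A minimal sketch of the idea (the 88 channels echo the 88 pianola keys; the rest is illustrative, not from the patent):

    import random

    NUM_CHANNELS = 88  # one channel per pianola key

    def hop_sequence(shared_key, length):
        # the "piano roll": a pseudorandom channel sequence which both the
        # transmitter and the receiver can reproduce from the same key
        rng = random.Random(shared_key)
        return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

    tx = hop_sequence(shared_key=1942, length=10)
    rx = hop_sequence(shared_key=1942, length=10)
    assert tx == rx  # both ends hop to the same channel at every time step
    print(tx)        # a jammer without the key cannot predict the next channel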


How did such an unlikely pair come to invent such a product?


Hedy had been married in 1933 at the age of 19 to the Jewish born Fritz Mandl, an extremely wealthy and influential Austrian international arms dealer. Well educated and beautiful (See Hedy), she was already famous in Austria for the film "Ecstasy", released earlier that year, in which she had appeared naked (reputedly a world first). Mandl installed her as his trophy wife in his substantial estate where she was expected to play the perfect hostess to the high level government delegations who were invited to discuss arms supplies in his luxurious surroundings. Like an exotic bird in a gilded cage, Mandl demanded her presence at all times, even during technical discussions and business negotiations, so she was most likely aware of some of the technical issues involved. After four years in Mandl's clutches she escaped, making her way to America where she successfully took up acting once more and where in 1941 she met Antheil by chance.

George had spent over ten years in Paris and was an acquaintance of many of the great artists, writers and composers of the day. His early compositions were outrageously avant garde and among the vast array of instruments called for in his "Ballet Mécanique" were seven electric bells, a siren, three aeroplane propellers, gongs, two pianos and sixteen synchronised pianolas. (See George)


Hedy and George both spoke German. She was born into a Jewish family in Austria; George's family were Polish immigrants in the USA, and in the early days of the Second World War they both wanted to do their bit for the war effort before the U.S. entered the war. Between them they had only superficial knowledge of the technologies involved, yet they pioneered what was to become an essential component of secure military communications and eventually a subsystem of modern spread spectrum and cellular communications.


Unfortunately they got very little credit for their ideas. They had no credibility as engineers, Hedy was an alien and their loyalties were suspect; besides, the U.S. Navy's torpedoes were powered by compressed air, while the German torpedoes were electrically driven. Their patent eventually expired in 1957 without earning them any revenue, just about the time their ideas were picked up and exploited by Sylvania and others for "classified" military applications. Adoption of the technology for commercial applications was hampered by the reluctance of the U.S. Federal Communications Commission (FCC) to allocate sufficient frequency spectrum for its use.


Spread Spectrum Applications

  • Security
  • Hedy and George's system solved the security problem by spreading the signal to be transmitted over multiple frequency carriers and frequency hopping between them, but it required a much wider system bandwidth for the communications.

    But just as Hertz did not envisage the use of radio waves for communications, and Rutherford did not foresee the possibility of generating nuclear power from nuclear fission, they did not anticipate two more important potential applications of their invention, made possible by its broadband signal channels, which are key to modern communications systems.

  • Multiplexing
  • The first alternative application seems fairly obvious and could have been implemented using their pianola rolls. By using more than one independent code sequence, provided by additional pairs of pianola rolls, they could have transmitted more than one simultaneous message over the broadband link. This is a very simple example of a code division multiple access (CDMA) system used for multiplexing. Problems could possibly occur if the multiplexing codes happened to result in two messages at some point being assigned the same frequency, but this could be avoided by careful programming of the code sequences or overcome, if it did occur, by blocking the transmission for the corresponding interval and resending the message during the next interval.

  • Improved noise performance
  • They can be forgiven for not anticipating the second application. In 1948 Shannon published his mathematical theory of communications in which he outlined the possible noise / bandwidth tradeoff in a communications channel. He showed that system noise performance can be improved by spreading the signal across a greater bandwidth for transmission, a technique which is also used in modern communications systems.

  • Correlation detection and Filter matching
  • In the early 1950s, electronic implementations of spread spectrum technology were mostly for military applications, many of which were classified as secret at the time. Radar systems, which need to extract very low level signals reflected by the target from high ambient thermal noise and clutter (extraneous signals or interference), are such examples. The solution was to modulate the transmitted pulse with a pseudorandom code sequence. The reflected signal was fed into a correlation detector together with a delayed reference copy of the same pseudorandom code. The correlation detector gives an output only when the modulation patterns of its two inputs are precisely matched or correlated, even in the presence of noise levels which may be greater than the signal level; otherwise the output is zero or just noise. The delay between the reference signal and the reflected signal is varied until the correlator indicates a match, at which point the delay corresponds to the two way transmission time of the signal between the transmitter and the target, from which radar range can be calculated (see the sketch after this list).

    Another way of thinking of the correlation process is to consider it as filter matching. Using the piano roll analogy, the signal is only transmitted when there are holes in the transmitter piano roll. A matching piano roll, delayed from the transmitter roll, is used in the receiver and the output is only considered valid if there is a received signal corresponding to every hole (or most of them) in the roll otherwise the output is ignored.
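A minimal numerical sketch of this correlation detection (with illustrative parameters; a real radar of the period used analogue hardware rather than software):

    import numpy as np

    rng = np.random.default_rng(0)
    code = rng.choice([-1.0, 1.0], size=256)   # pseudorandom +/-1 modulation code
    true_delay = 37                            # samples of two-way travel time

    # received signal: a weak, delayed echo of the code buried in stronger noise
    received = rng.normal(0.0, 1.0, size=1024)
    received[true_delay:true_delay + code.size] += 0.25 * code

    # slide the reference copy over the received signal and correlate
    correlation = np.correlate(received, code, mode="valid")
    print(int(np.argmax(correlation)))         # recovers ~37 despite the noise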


1942 Austrian-born engineer and physicist Rudolf Kompfner working at the Nuffield Laboratory Physics Department of Birmingham University (the birthplace of the cavity magnetron) first sketched out the design for the Travelling Wave Tube (TWT) amplifier which he built early the following year. Similar in some ways to the klystron it was a radio frequency (RF) microwave amplifying tube but with a very wide bandwidth, and was the first to be capable of amplifying high capacity multiplexed telephone voice channels or broadband data and TV channels. It was thus suitable for use in microwave repeater stations enabling the expansion of the telephone network and was later used in onboard satellite communications repeaters.

As in the klystron, the TWT modulates an electron beam travelling between the cathode and the high voltage anode of a vacuum tube, but it does not use resonant cavities to launch and capture the microwave signal since, by their very nature, resonant cavities limit the bandwidth of the signal. Instead the TWT RF input signal is coupled to a long narrow helical wire coil inside the tube, about 30 cm (1 foot) or more in length, which forms an RF circuit stretching the length of the tube but not connected to the electrodes. (See diagram of the Travelling Wave Tube). The tube itself is contained within a cylindrical magnet which keeps the electrons focused in a narrow beam travelling along the centre line of the helix. A directional coupler induces the signal current into the helical coil at the cathode end and another coupler extracts the signal at the anode end.

The helical RF circuit acts as a delay line, in which the RF signal travels at near the same speed along the tube as the electron beam. The electromagnetic field due to the RF signal in the helical coil interacts with the electron beam, causing bunching of the electrons as in the klystron, and the electromagnetic field due to the beam current then induces more current back into the RF circuit thus reinforcing the signal current as it passes along the tube in a process known as velocity modulation. At the output end of the helix, the amplified RF signal is extracted by the second directional coupler.

Waves reflected from the output end of the delay line are prevented from travelling back towards the cathode by attenuators placed along the RF circuit.
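The crucial design constraint is the "near the same speed" condition: the helix slows the axial velocity of the RF wave to roughly the speed of light multiplied by the ratio of the helix pitch to its circumference, and the beam voltage is chosen so that the electrons keep pace. A rough non-relativistic sketch (the helix dimensions are invented for illustration and are not Kompfner's):

    import math

    C = 3.0e8          # speed of light, m/s
    E = 1.602e-19      # electron charge, Coulombs
    M = 9.109e-31      # electron mass, kg

    pitch = 1.0e-3     # m, assumed spacing between helix turns
    diameter = 3.0e-3  # m, assumed helix diameter

    # axial phase velocity of the wave guided along the helix
    v_axial = C * pitch / (math.pi * diameter)

    # anode voltage giving the electron beam the same velocity
    voltage = M * v_axial ** 2 / (2 * E)
    print(f"v = {v_axial:.1e} m/s, beam voltage ~ {voltage:.0f} V")  # ~3e7 m/s, ~2.9 kV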


The TWT was subsequently refined by Kompfner working with John Pierce and Lester M. Field at Bell Labs.


In 1962, the first communications satellite, Telstar 1, was launched with a 4 Watt, 4 GHz TWT amplifier used in the transponder to transmit the first live television signals across the Atlantic.


1943 The printed circuit board was patented in the UK by Austrian born Jewish refugee Paul Eisler, the acknowledged father and publiciser of the PCB. Most of Eisler's patents were for a subtractive process whereby circuit tracks were made by etching copper foil which has been bonded to an insulating substrate. Like the plug, this simple invention was late in arriving - only four years before the much more complex transistor. There had been many proposed designs for PCBs over the previous 40 years, using a wide range of different processes by Hanson, Berry, Schoop, Ducas, Parolini, Seymour, Franz, Sargrove, Centralab and others, but Eisler's processes were more practical and were quickly adopted by the US Army. Despite this, it was not until the 1950s that the use of PCBs finally took off, helped no doubt by the advent of the transistor.

Some of the processes involved in Eisler's patents were borrowed from the printing industry, and some of the patents mentioned above were cited by Eisler in his patent applications. Although the use of PCBs was virtually unknown at the time Eisler's patents were granted, they were challenged by the Bendix Corporation in the USA and overturned in 1963 on the grounds of prior art. Eisler died in 1995 a bitter man.

Eisler held patents for a number of other popular developments, mostly involving heated films, including the rear windscreen heater, heated wallpaper, food warmers for fish fingers and other foods, heated clothes (John Logie Baird got there first with his 1918 patent for damp-proof socks) and also a battery powered pizza warmer for take out pizzas.


It was another six years before dip soldering was invented.


1944 Germany's Terror Weapons, the V-1 Flying Bomb and V-2 Missile were deployed in combat for the first time. The V-1 was essentially a pilotless aeroplane which was championed by the Luftwaffe whereas the V-2 rocket was more like a guided artillery shell and this was the weapon preferred by the army.

There was little crossover between the V-1 and V-2 programmes, which were developed in parallel, and Hitler was not particularly enthusiastic about the rocket programme, believing that the weapon was simply a more expensive artillery shell with a longer range.


In 1942 the German V-1 Flying Bomb, the precursor of the cruise missile, achieved its first successful powered flight on December 3rd, two months after the first V-2 flight. Originally designated the Fieseler Fi 103, it was renamed the V-1 from the German Vergeltungswaffe 1 - "Vengeance or Retaliation Weapon 1". Basically a pilotless aeroplane, it was powered by a pulse jet, air breathing engine. (See image and cutout diagram of the V-1 Flying Bomb). It was only 7.73 m (25.4 ft) long with a wingspan of 5.33 m (17.5 ft) and had a range of 250 km (160 miles); it carried a payload of 850 kg (1,900 lb) of explosives, flying at an altitude of between 600 and 900 m (2,000 to 3,000 ft) at a speed of 640 km/h (400 mph).

The first V-1 was launched towards London on 13 June 1944, one week after the successful D-Day landings in France, and landed in Hackney killing 6 people.


  • The Designers
  • The idea of a flying bomb was first proposed to the German Luftwaffe in 1935 by Paul Schmidt, a pioneer of pulse jet engines but his proposition was rejected. Four years later, at the start of World War II, the idea was proposed once more, this time by Fritz Gosslau who had already developed a remote controlled surveillance aircraft and had been working independently on pulse jet engines with Manfred Christian at the Argus Motor Works in Berlin. This time the proposal met with support but no commitment from the Luftwaffe who helpfully suggested teaming up with Paul Schmidt. Subsequently the team produced a more practical engine in 1940.

    Argus needed help in designing the airframe to carry their engine and in 1942 Robert Lüsser, previously chief designer and technical director at Heinkel, now working for the Fieseler company, took charge of the overall project and produced a preliminary design for the complete flying bomb.

    They still needed a guidance system and this was subcontracted to the Askania company in Berlin where engineers Guido Wünch, Herman Pöschl and Kurt Wilde designed the necessary guidance and control system.

    Overall project control was the responsibility of Berthold Wöhle.


  • The Design
  • Since the missile was expendable, it had to be very inexpensive to make and should use cheap readily available materials and low grade fuel. It was however pushing the bounds of known technology and thus subject to numerous changes and improvements as well as trials of alternative variants during both the development period and its deployment. The following is a description of the main components.


    • The Pulse Jet Engine
    • The pulse jet engine was beautifully simple. Apart from the input flap valves controlling the air supply, it had no moving parts, not even a fuel pump. It consisted of a long stovepipe or jet pipe, open at one end and covered at the other by a grid of 126 very thin, double leaved, spring steel non-return flap valves or shutters. (See Diagram of the Pulse Jet Engine). The fuel was regular gasoline/petrol and a compressed air supply to the fuel tank at a pressure of 100 psi pumped the fuel into the motor. Inside the pipe and close to the closed end was an array of 9 jets which delivered a constant spray of fuel into the pipe. To start the engine and keep it operating while stationary on the starting ramp, compressed air was pumped through 3 nozzles into the pipe and a standard spark plug provided the ignition of the fuel air mixture. Once the engine was in motion, ignition was self sustaining and the spark plug and external compressed air supply were no longer needed since the air supply was drawn into the pipe through the flap valves by the motion of the engine through the air.

      The combustion process was also very simple. The ignition of the fuel-air mixture caused an explosion, or rapid expansion of burning gas. The increased pressure of the expanding gas in the pipe slammed the spring flap valves closed and the high pressure burning gas was ejected from the open end of the pipe in a jet stream, thus providing the reactive thrust to drive the pipe forwards (towards the closed end). As the exhaust gas left the pipe, the internal pressure in the pipe would drop and the external air pressure on the flap valves, due to the motion of the pipe through the air, would exceed the gas pressure in the pipe. The resulting differential pressure, assisted by the valve springs, caused the valves to open allowing a new charge of air to enter the pipe. The spark plug was no longer needed for ignition because enough of the previous charge of burning gas remained in the pipe to ignite the new fuel-air mixture. This combustion cycle, or power pulse, was repeated around 47 times per second, the resonant frequency of the pipe, which gave the engine its characteristic buzzing sound and hence the missile's nicknames, the Buzz Bomb or Doodlebug.

      The ignition shutter system was vulnerable to failure because of the severe vibration of the engine but it was not intended to last beyond the V-1's normal operational flight life of one hour maximum. It took only 22 minutes flight time to cover the 225 kilometres (140 miles) between the launch sites at Pas de Calais in France and its targets in London.

      Because the engine power depends on the air pressure, generated by the speed of the jet pipe through the atmosphere, to drive air into the engine, it delivers very little power at speeds below 240 kph (150 mph) and needs an external power boost from a catapult to give the missile enough speed and hence power to get it off the ground. Once airborne the engine could deliver a thrust of 310 kg (683 lbs) flying at 700 kph (435 mph) at an altitude of 1000 m (3280 ft).

    • The Airframe
    • Designed by Lüsser, the airframe was originally constructed almost entirely of inexpensive, welded sheet steel. The pulse jet was mounted above the fuselage which housed the magnetic compass, the warhead, the fuel tank which was integral with the fuselage, two compressed air tanks, a battery, three guidance gyroscopes and a radio transmitter.(See Cutaway diagram of the V-1). To avoid interference with the magnetic compass, the nose of the fuselage was constructed from aluminium, which is non-magnetic, instead of the sheet steel used for the rest of the plane.

      Because the wings were very small, the missile had a very high stall speed of around 300 kph (190 mph). This meant that its take-off speed was correspondingly very high and was a second reason why power assisted launching was needed to accelerate it to a velocity greater than its stall speed.

      To save cost and weight, no ailerons were provided on the wings and the only control surfaces were the elevators and the rudder in the tail.

      The bomb obviously did not need any landing gear.

    • Internal Power
    • On-board power was mainly supplied by means of compressed air stored at over 2000 psi (150 atmospheres) in two large spherical tanks constructed with an internal shell of welded mild-steel sheet, tightly bound over with steel wire to contain the high pressure. Compressed air, supplied via pressure reduction valves, was used to spin the gyroscopes to operate the pneumatic servos driving the rudder and the elevators and for providing the pressure in the fuel tank to pump the fuel into the engine.

      Two 30 Volt batteries supplied electrical power to various relays, sensors and actuators as well as the radio if it was installed.

    • The Guidance System or Autopilot
    • The bomb was directed to its target from launching ramps precisely oriented towards the target. Once airborne, it was kept on track by means of an ingenious guidance system which relied on a magnetic compass to monitor the heading, gyroscopes for stability and a barometric altimeter for altitude control.

      The master gyroscope was a displacement gyro which detected any deviations from the pre-set flight path. It provided error signals which were used in feedback control systems to move the control surfaces of the elevators and the rudder by means of pneumatic servos to minimise the error. The gyro was mounted in gimbals with its axis inclined at 20 degrees above the horizon, making it sensitive to roll as well as pitch and yaw movements. (See more about Gyroscopes and Guidance)

      Drift of the master gyroscope was corrected by the magnetic compass which provided a reference heading. The compass was housed on vibration damping springs contained within a pair of non-magnetic, mating wooden hemispheres.

      Pitch and yaw were controlled by two spring retained, rate gyros damped by dashpots mounted at 90 degrees to the fuselage centre line. As with the master gyro, error signals from the rate gyros caused pneumatic servos to move the elevator and rudder control surfaces so as to minimise the errors. The gyroscope which controlled the elevators was sensitive to pitch only and was mounted on the vertical axis and the rudder gyro which was sensitive to yaw only was mounted on the horizontal axis. Changes in the missile's attitude caused precession in the corresponding rate gyro in proportion to the rate of change of direction. Differential pneumatic signals from the rate gyros were mixed with the signals from the displacement gyro to provide stability by damping any oscillations and preventing the control surfaces from over-shooting.

      Because the plane had no ailerons, roll compensation was provided by the rudder. Since the axis of the master gyro is elevated by 20 degrees to the horizontal in a vertical plane, a roll will cause the tilted gyro axis to move left or right in the direction of the roll and this will appear to the auto-pilot as a yaw. Hence if the left wing should drop, it uses the rudder to steer to the right resulting in a higher velocity of air over the left wing, thus raising it to the normal position. This interaction meant that rudder control alone was sufficient for steering and no banking mechanism was needed.

      Altitude control was provided by error signals from an aneroid barometer capsule which expands with a decrease in ambient air pressure (or increase in altitude). A pneumatic servo caused deviations from the desired altitude to tilt the gimbals of the displacement gyro. The resulting error signal was used to control the pitch of the bomb causing it to rise or fall to its planned altitude.

      A small two-bladed windmill propeller on the nose of the bomb drove a counter which determined the distance travelled. When the target area was reached the counter triggered a mechanism to shut off the fuel supply to the engine, causing the bomb to dive silently onto its target.

    • The Launching System
    • Ground-launched V-1s were propelled up launch "ski" ramps 42 m to 48 m (138 to 158 ft) long, inclined at 6 degrees, by a steam piston hooked onto the fuselage. To minimise detection by the enemy, the ramps were built very short and, since the stall speed of the bomb was so high and its low speed power so low, it needed a massive acceleration from an external source, in the short distance available, for it to reach take off speed. This assistance was provided by a steam piston designed by Hellmuth Walter, which used a hypergolic (self igniting) mixture of hydrogen peroxide and an aqueous solution of calcium, sodium or potassium permanganates to create vast quantities of steam which drove the piston forwards at high speed, accelerating the missile to its take-off speed of 395 kph (245 mph).

      To avoid the use of ramps, some V-1s were air launched from twin-engined Heinkel He 111 bombers to achieve the necessary high launch speed. This was acceptable for test purposes and for manned versions, since the pilots could not tolerate the excessive g forces experienced in a "ski" launch, but it was not particularly practical for operational purposes because the difficulty of determining the precise location reference coordinates for setting the bomb's guidance system led to wide inaccuracies in targeting.

    • Accuracy
    • Unlike the V-2 rocket, the V-1's guidance system was active over the bomb's full trajectory until it was over its target, giving it superior accuracy to the V-2. Nevertheless, despite its sophisticated guidance system, the V-1's Circular Error Probability (CEP), defined as the radius of a circle, centred on the mean impact point, whose boundary is expected to include the landing points of 50% of all the missiles launched, was 13 kilometres (8 miles). This meant that the V-1 was incapable of hitting specific targets and caused indiscriminate damage to the civilian population.

    • Development and Production
    • The V-1 was developed by the Luftwaffe at the Army Research Centre at Peenemünde on Germany's Baltic coast.

      Manufacturing of the major assemblies was initially carried out in the Fieseler and Volkswagen factories but after these facilities were bombed by the RAF in August 1943, production was transferred to the less vulnerable, but notorious, underground Mittelwerk plant near Nordhausen where it was carried out by slave labour from the nearby Mittelbau-Dora concentration camp.

    • Variants
    • Because of wartime shortages of key materials, plywood was substituted for steel in the construction of the wings during production. For similar reasons a variety of explosives were used in the warhead.

      Radio controlled guidance was considered but ruled out since it was vulnerable to jamming by the enemy and the necessary radio beacons providing the navigation signals could be disabled or destroyed by enemy action. The inertial guidance system was chosen because it was autonomous and independent of any communications with the ground. Radio transmitters were however installed in some later versions to provide telemetry about the locations of the impacts of the bombs.

      Aware that the enemy could locate the launch sites by tracking the trajectory of the missile which was aimed directly at the target, a steerable version of the V-1 was also being developed to enable the bomb to change course and confuse the tracking radar, but the war ended before this was ready.

      Besides the air launched versions, several longer range models were produced.

      Several manned versions were also built. One, piloted by the famous, daring aviatrix, test pilot and Nazi poster girl Hanna Reitsch, was used for investigating the aerodynamic and control properties of the plane. Other manned versions were intended for attacking high value targets, but they were considered to be suicidal even for simple flight testing. Meanwhile, elements in the Luftwaffe were planning a suicide division and 70 volunteer pilots had signed up, but Albert Speer, head of the German war industry, persuaded Hitler that such missions were not in the tradition of the German warrior, and so the idea of the piloted V-1 was abandoned.


  • Effectiveness of the V-1
  • The V-1 was an elegant, low cost engineering solution to a complex problem, but because it was a single use projectile with a limited payload, its cost effectiveness in terms of cost per target destroyed could not compare with that of conventional bombers, which carried out multiple sorties with much greater bomb loads.

    Over 30,000 V-1s were produced between June 1944 and March 1945, with around 10,000 fired at targets in Britain. Of these only 2,419 reached London, killing 6,184 people and injuring 17,981. The Belgian port of Antwerp was also a major target and was hit by 2,448 V-1s between October 1944 and March 1945. A total of around 9,000 were fired at targets in Continental Europe.

    The cost of the V-1 missile was only one sixth of the cost of the V-2 rocket and it carried a similar payload but it was inaccurate, slow and vulnerable to interception by fighter planes of the day and to anti-aircraft fire. However because it was so small it was a difficult target to hit.

    Its launch sites were also difficult to camouflage and were pounded by Allied bombers. The high g forces of up to 22 g experienced during launch and the severe vibration from the pulse jet during flight were major sources of unreliability.

    The death rate inflicted on London was only 2.6 deaths per bomb but, because of the menacing noise of its engine which announced its approach, and its indiscriminate effects, it had a terrifying impact on the population. The V-1 was thus largely a terror weapon and had little overall impact on the outcome of the war.


In 1942 the German V-2 Missile, the world's first long range ballistic missile and the progenitor of all modern rockets, was successfully fired for the first time, without a warhead, on the 3rd of October. Two months later Adolf Hitler signed the order approving it for production. It was a single stage rocket fuelled by alcohol and liquid Oxygen (LOX) producing 25,000 kg (55,100 lb) of thrust at lift off. Burning 58 kg of alcohol and 72 kg of Oxygen per second for 65 seconds, the rocket motor would propel the missile to an altitude of 93 km (58 miles) at speeds up to Mach 5 and drop one ton of high explosive on a target up to 320 kilometres (200 miles) away just five minutes after launch.

It was also the first known human artifact to enter outer space reaching an altitude of 189 kms (117 Miles) in tests designed to measure cosmic rays, meteoroid flux and to explore conditions in space.

Known originally as the Aggregat 4 - "Assembly" 4 or the A-4, it was dubbed by the Propaganda Minister Josef Goebbels as the Vergeltungswaffe 2 - "Vengeance Weapon 2".

Its first operational flight was aimed at Paris on September 7, 1944, three months after the first V-1 entered service, but it did not reach its target. The next day, V-2s were launched against both Paris and London. The first missile landed at Charentonneau, south-east of Paris, killing six, and the second landed at Chiswick in West London, killing three people.


The V-2 rocket had its roots in Germany's enthusiastic rocket societies such as the Verein für Raumschiffahrt (VfR) - "Society for Space Travel" and inspired by Hermann Oberth's landmark book Die Rakete zu den Planetenräumen - "The Rocket into Interplanetary Space" published in 1925. Developments in the 1920s attracted the attention of the German military establishment, still smarting under the severe restrictions imposed by the Treaty of Versailles after World War I on the weapons Germany was allowed to use. Rocket technology was not included in these restrictions and rockets were seen as potentially superior weapons to artillery, having a longer range and greater mobility.

Oberth's publication received a much more sympathetic response in Germany than Goddard's similar publication did six years earlier in his native USA.


  • Building the Team
  • The development of Germany's military rocket technology was led by German Army Artillery Officer Walter Dornberger. In 1932 he began to recruit prominent members of the VfR to develop a series of experimental rockets for the army at his weapons development site at Kummersdorf near Berlin.

    The first three recruits included Wernher Magnus Maximilian, Freiherr von Braun, a recently graduated, 19 year old Prussian aristocrat, rocket enthusiast and theoretician, to whom Dornberger gave a grant to study the "Construction, Theoretical, and Experimental Solution to the Problem of the Liquid Propellant Rocket", for which von Braun received a PhD in 1934.

    Joining him were Heinrich Grünow, an exceptional mechanic who could translate ideas into hardware, and Walter Riedel, an engineer with the Heylandt Company which produced liquid oxygen. Riedel was an early experimenter with rocket motors, which he had used in a rocket propelled car. Also known as "Papa" Riedel, he became head of the technical design office and deputy to von Braun even though he lacked formal qualifications.

    As Dornberger's team expanded, in 1934 Arthur Rudolph, another of Heylandt's engineers, who had designed a prototype liquid fuelled rocket motor for the army, joined the team as head of the Development and Fabrication Laboratory, where he specialised in production. In the same year, gyroscope expert Johannes Maria Boykow, technical director of the Kreiselgeräte - "Gyro Devices" - company, was invited by von Braun to join the rocket team to take responsibility for guidance and stability.

    In 1936 Walter Thiel, a chemical engineer working in the army's Ordnance Research Test Section, was transferred to Dornberger's team to develop a new high power engine.

    He was followed in 1937 by an old friend of von Braun, Klaus Riedel (no relation to Walter) who had been working at Siemens after playing a major role in rocket development at "Raketenflugplatz Berlin" - the launch site of the VfR. Riedel took up the position of Head of the Test Laboratory.

    The same year Rudolf Hermann who had built a supersonic wind tunnel at the Technical University of Aachen was recruited as chief aerodynamicist to develop a similar facility for Dornberger's team.

    In 1938 the army's weapons research establishment was relocated to Peenemünde on the Baltic coast and the German Ordnance Department requested that the Peenemünde team develop a ballistic weapon with a range of 200 to 300 kilometres and a payload of one ton. This was the birth of the V-2.

    In 1939 Hermann Steuding, a mathematician from the Darmstadt Institute of Technology (DIT), joined the group to set up an Aeroballistics and Mathematics Laboratory. He in turn recommended his friend, Flight Captain Ernst Steinhoff, a specialist in aeronautical engineering at DIT, who joined the group to take charge of the Guidance, Control and Telemetry Laboratory which had been without a leader since Boykow's untimely death in 1935.

    In 1940 electronics engineer Helmut Gröttrup joined Steinhoff's team in charge of electrical and flight control systems.

    Each of these men brought with them experienced fellow engineers and together they formed the initial core of the team which eventually developed the V-2.


  • The Design and the Technology (See photo and cutaway diagrams of the V-2 Rocket).
  • The V-2 was an exceedingly complex machine pushing the boundaries of existing technology on several fronts and was subject to constant changes as the development progressed. The exigencies of war and the consequent political pressures forced the adoption of unrealistic production targets so that it was introduced when it was far from ready and some 65,000 engineering changes were made to the missile design between the decision to put it into production in October 1942 and the end of the war. Procurement of complex, precision parts made from exotic materials during times of severe scarcities only added to the problems. Considering that only 6,152 V-2s were built and only 3,170 were used in anger, very few of them were the same, a situation which was of great concern to the engineers involved.

    The main components are described below.


    To SKIP THE TECHNICAL DETAILS – JUMP TO Scheming, Barbarism, Collapse, the Score and the Reckoning (Hit the BACK BUTTON to return here.)


    • The V2 Engine
    • The rocket motor design was the result of many years of experimenting with different fuels, combustion chamber and nozzle shapes and sizes, fuel pumping systems, injector designs and cooling systems, carried out by Walter Thiel and his team. The motor was designed to generate about 55,000 lbs (24,947 kg) of thrust on start up increasing to 160,000 lbs (72,574 kg) when the maximum speed was reached. The motor typically burned for only 60 to 65 seconds, pushing the rocket to a speed of around Mach 5.

      The fuel chosen was a mixture of 75% ethyl alcohol (ethanol), derived from fermenting potatoes, with 25% water. Thirty tons of potatoes were required to manufacture the fuel for each V-2. The oxidiser was liquid Oxygen (LOX). The water was added to the alcohol in order to reduce the temperature of the combustion gases and had a limited effect on the engine performance.

      The engine consisted of three parts, a spherically shaped combustion chamber which opened out into the main rocket thrust nozzle, the injectors for atomising the fuel mixture and feeding it into the combustion chamber and the pump for delivering the fuel and oxidiser to the injectors.


      The Combustion Chamber - Thiel was able to reduce the length and weight of the combustion chamber and nozzle by ensuring better atomisation of the propellants. This resulted in faster burning, so that it was no longer necessary to have a long burning path to ensure complete combustion of the fuel. The burning fuel however reached temperatures of 2500 - 2700 °C (4500 - 4900 °F), hot enough to melt steel.


      Thermal Management - To contain the extreme temperature combustion products, regenerative cooling was used to cool the walls of the combustion chamber and exhaust nozzle. This was accomplished by using double skinned walls for the combustion chamber and nozzle with a cavity between the inner and outer walls to act as a cooling jacket. The fuel mixture of alcohol and water was fed through a manifold near the base of the nozzle to circulate through the cavity on its way to the injectors at the top of the combustion chamber, simultaneously cooling the chamber walls while pre-heating the fuel to improve combustion efficiency. This method alone however was insufficient to avoid hot spots on some sections of the chamber walls.

      To overcome this remaining problem Moritz Pöhlmann, one of Thiel's engineers, suggested injecting some of the fuel directly into the chamber through perforations in the chamber walls, where the fuel evaporated, creating a local cooling effect and forming a cooled boundary layer which further protected the wall of the chamber. The technique is now known as film or veil cooling. 10% of the fuel flow was used for film cooling and this reduced the heat transfer to the chamber walls by approximately 70%.


      Fuel Mixing and Combustion - The purpose of the injectors was to create controlled combustion of the propellants, avoiding any tendency to explode while at the same time optimising the efficiency and speed of the burning process. To achieve this both the fuel and oxidiser have to be evenly distributed throughout the combustion chamber and they need to be atomised into very small droplets to facilitate mixing as well as to increase the burn rate of the fuel.

      In the V-2 engine, the fuel and oxidiser were pre-mixed in eighteen large bell shaped injector "pots" arranged in two concentric rings on the dome of the combustion chamber. The open end of each bell was mated to a similar sized aperture in the combustion chamber through which the atomised mixture of fuel and oxidiser passed.

      Like the combustion chamber, the body of each injector pot was also double walled and the fuel passed into this injector cavity at high pressure from the cavity around the combustion chamber. A series of small nozzles around the circumference of the inner wall of each injector body directed fine swirling jets of fuel droplets towards the centre of the injector chamber. At the centre of the closed end of the bell, the oxidiser was fed at high pressure into the centre of the injector chamber through a single perforated cup-shaped injector, similar to a shower head, which separated the oxidiser into tiny droplets and directed them against the jets of fuel coming in the other direction from the wall of the injector pot. Burning did not take place until the mixture entered the combustion chamber.

      The Oxygen was supplied from a turbopump to the injectors through eighteen large pipes.


      The Rocket Thrust Nozzle - The bell-shaped exhaust nozzle is designed to extract the maximum thermal energy from the exhaust stream, converting it into kinetic energy by reducing its temperature and pressure, thus increasing the exhaust velocity and improving the overall efficiency of the rocket.

      The exhaust gases emerge from the aperture in the combustion chamber at high pressure and temperature, travelling faster than the speed of sound. As they enter the wider, expansion part of the nozzle, we should expect the exhaust temperature and pressure to be reduced by the Joule Thomson Effect, which dictates that increasing the volume of a compressible fluid reduces its temperature and pressure (as in a refrigerator). On the other hand, expanding the flow of a non-compressible fluid, such as water, after passing through a constriction would reduce its speed. The exhaust gas, however, is not a non-compressible fluid. When the flow rate of the rocket exhaust is faster than the speed of sound, the effect of the lower pressure due to the expansion of the gas cannot propagate back through the nozzle aperture, and the Conservation of Energy Law dictates that the reduction in pressure and temperature energy is compensated by an increase in the kinetic energy of the gas flow, and hence its speed.

      See more about Gustaf de Laval and the development of nozzles.
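      The standard isentropic nozzle relation puts rough numbers on this energy conversion. Every input below is an assumed round figure for a V-2 class alcohol/LOX engine (the text quotes only the chamber temperature range), so this is a sketch, not the V-2's actual design calculation:

          import math

          R = 8.314       # J/(mol*K), universal gas constant
          gamma = 1.2     # assumed ratio of specific heats of the exhaust
          M = 0.025       # kg/mol, assumed mean molar mass of the exhaust
          Tc = 2800.0     # K, chamber temperature (text quotes 2500-2700 C)
          pr = 15.0       # assumed chamber-to-exit pressure ratio

          # Isentropic exit velocity of a de Laval nozzle:
          ve = math.sqrt(2 * gamma / (gamma - 1) * R * Tc / M
                         * (1 - pr ** -((gamma - 1) / gamma)))
          print(f"exit velocity ~ {ve:.0f} m/s")  # ~2000 m/s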


      A Steam Engine in Space - Previously, all liquid fuelled rockets had used pressurised fuel tanks to feed the propellants into the engine, but this method could not produce the flow rate of 58kg (128 lbs) per second of alcohol and 72 kg (160 lbs) per second of LOX necessary to generate the 25,000 kg (55,100 lb) of thrust needed for the V-2.
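      These quoted figures cross-check neatly: thrust divided by total propellant flow gives the engine's effective exhaust velocity, broadly consistent with the ~2,000 m/s isentropic estimate sketched above.

          G = 9.81                 # m/s^2, standard gravity
          thrust_N = 25000 * G     # quoted lift-off thrust, 25,000 kg force
          mdot = 58 + 72           # kg/s, quoted alcohol + LOX flow rates
          ve = thrust_N / mdot     # effective exhaust velocity, ve = F/mdot
          print(f"effective exhaust velocity ~ {ve:.0f} m/s")  # ~1890 m/s
          print(f"specific impulse ~ {ve / G:.0f} s")          # ~190 s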

      Von Braun came up with the idea of using turbopumps to perform this function after seeing similar pumps used by the fire service to provide high pressure water jets. The alcohol and the LOX were delivered to the combustion chamber by two rotary pumps, driven by a central 580 horsepower steam turbine all mounted on the same shaft running at 3,800 rpm.

      The steam to power the turbine was raised by a hypergolic (self igniting) mixture of Hydrogen peroxide (80 %) and water (20%) reacting with a solution of Sodium permanganate (66%) with water (33%) which produces large quantities of steam as in the steam piston catapult developed by Hellmuth Walter for the V-1 launching system.

      Pumping these hypergolic liquids into the mixing chamber still employed the older method of pressurised fuel tanks with compressed air or Nitrogen used to provide the necessary 32 atmospheres of pressure.

      The pumping assembly had to cope with the extreme temperature difference between the +425 °C of the superheated steam in the turbine and the -183 °C of the LOX in one of the pumps. Furthermore, special seals, gaskets and bearings had to be developed, since pure Oxygen causes the breakdown of organic seals and lubricants. Though the turbopumps replaced the pressurised fuel delivery system, the fuel and oxidiser tanks still had to be pressurised with nitrogen to prevent cavitation in the powerful turbopumps.


      Thiel's death in the RAF bombing raid on Peenemünde in August 1943 was a big loss to the team.

      While technically his motor was very efficient, the complex, 18 injector 'basket-head' design and the plethora of hand-formed pipes needed to feed it were not suitable for volume manufacturing, and it was a plumber's nightmare to assemble. Furthermore the long rigid fuel and LOX pipes made it impractical to mount the motor on gimbals, which was the preferred method for directional control of the rocket.

      In 1942, Thiel had already begun working on the development of a Mischdüse ("mixing nozzle") injector plate, a much simpler system for injecting the propellants into the chamber, consisting of a simple flat injector plate, containing rows of fine holes, mounted at the head of the chamber. This eliminated the multitude of fuel and oxidizer feeder lines, producing a cleaner design with better mixing properties and improved reliability, and it also allowed for gimballed suspension of the rocket motor. This new design unfortunately had combustion instability problems which could not be overcome before the end of the war, and as a result the original basket head design had to be put into production instead.

      Thiel's injector plate concept is the basis of today's advanced rocket motors such as those used in the Saturn V.

      After the death of Thiel, Martin Schilling took over responsibility for the engine development.


    • Ballistics and Aerodynamics
    • The trajectory of a ballistic missile is only controllable during the launch phase, during which it accelerates to a target velocity pointing in the right direction, at which time the fuel supply is shut off. Once the motor is switched off, the missile coasts towards its target following (almost) the typical ballistic trajectory of an artillery shell, determined by its velocity and direction of flight at the moment of switch off. After this point there is no possibility of altering course to correct for any initial guidance or alignment errors, drift, side winds, headwinds, tailwinds, or any buffeting by the atmosphere as the missile arcs upwards to its apogee then descends to Earth.

      In the case of the V-2, the flight time to the target is around 5 minutes but the rocket is only powered for the first 60 seconds of the flight, and the guidance control is therefore only effective during this period, during which it covers 27 kilometres (17 miles). For the remaining 290 kilometres (180 miles) to the target, the missile is in free flight subject to atmospheric conditions. This placed limitations on its targeting accuracy.

      (See Launch Sequence below)


      Simple ballistic calculations for projectiles such as artillery shells assume that the gravitational force is constant and that the projectile is not subject to aerodynamic drag or wind conditions, so that it follows a parabolic path in a vertical plane. These assumptions are not valid for a missile which travels very fast over long distances and reaches very high altitudes above the Earth's atmosphere, and the following effects, all of which affect targeting accuracy, have to be taken into account when setting the missile's velocity and direction at the start of its ballistic flight.

      • The rocket travels very fast through the atmosphere where it is subject to very high drag shortening its range.
      • The density of the air decreases with altitude, so that the drag also decreases with altitude, and above the atmosphere there is no drag. The parabolic path of the V-2 takes it above the level of the stratopause (50 - 55 kms) where the air density is only 1/1000 of that at sea level. Flying above the atmosphere enables maximum range to be achieved.
      • Aerodynamic control surfaces such as wings, fins and rudders do not work where there is no air.
      • At very low speeds, immediately after lift off, the effectiveness of aerodynamic control surfaces is also very low or impractical.
      • Heavy rockets can not take off at the desired angle of elevation necessary for optimising the range because the initial speed does not build up quickly enough and aerodynamic forces on the control surfaces are too weak to stabilise the vehicle and keep it airborne against gravity and wind, so that the rocket would topple over. They must therefore take off vertically and be subsequently tilted to the optimum launch angle as the speed increases to the desired launch speed before entering their ballistic flight path.
      • Note that although the propulsive force of the rocket motor is reasonably constant while it is in operation, the acceleration of the rocket actually increases during the flight. This is because the mass of the rocket decreases as the fuel is used up so that the same force delivers greater acceleration. (Newton's Law:  F = M.a). This also explains why rockets may be very slow to rise from the launching pad, since that is when the rocket's weight is at its maximum.

      • Control of the rocket's attitude or direction above the atmosphere or at very low speeds needs control of the direction of the rocket exhaust or the use of ancillary thrusters.
      • The gravitational force pulling the rocket downwards is not constant during the flight but decreases with altitude, thus increasing the missile's range. The apogee of the V-2 trajectory at its maximum range is 94 kilometres (58 miles), just below the Kármán Line (100 kms), and the gravitational force at the apogee is 2.9% less than the force at ground level.
      • The ballistic flight of the rocket starts at very high altitude, much higher than the level of the target, so that the direction and speed of the rocket when the engine is switched off must take this height difference into account.
      • The weight of the rocket reduces and its centre of gravity changes as its fuel is used up causing its attitude and hence direction to change.
      • No directional, or range controls to compensate for external conditions are possible once the rocket has entered its ballistic trajectory.

      All of the above factors have an effect on the actual trajectory of the missile and need to be incorporated into the target setting of the missile. These factors are considered in more detail on the page about Missile Ballistics and Aerodynamics. A rough, idealised sanity check on the numbers is sketched below.
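      The vacuum, flat-Earth parabola mentioned at the start of this list can be computed from the figures quoted elsewhere in this section; the 1,600 m/s cut-off speed is an assumed round number for roughly Mach 5:

          import math

          g = 9.81
          v = 1600.0                  # m/s, assumed engine cut-off speed
          for tilt in (47.0, 49.0):   # quoted tilt from the vertical
              theta = 90.0 - tilt     # elevation above the horizontal
              R = v ** 2 * math.sin(math.radians(2 * theta)) / g
              print(f"tilt {tilt:.0f} deg: free flight ~ {R / 1000:.0f} km")
          # ~258-260 km of free flight plus the ~27 km covered under power
          # is in the right region of the quoted 320 km maximum range; the
          # shortfall is made up by the real effects listed above, chiefly
          # the reduced gravity and near-zero drag at high altitude.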

      See also Tsiolkovsky Rocket Propulsion Theory
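      Tsiolkovsky's rocket equation also squares with the quoted figures. The effective exhaust velocity below is the ~1,890 m/s derived earlier from thrust and flow rate; everything else comes from the text:

          import math

          ve = 1890.0             # m/s, effective exhaust velocity (derived)
          m0 = 12500.0            # kg, quoted fully fuelled launch weight
          mdot = 58 + 72          # kg/s, quoted propellant flow
          burn = 65.0             # s, quoted burn time
          mf = m0 - mdot * burn   # ~4,050 kg burn-out mass

          dv = ve * math.log(m0 / mf)             # ideal velocity gain
          print(f"ideal delta-v ~ {dv:.0f} m/s")  # ~2,130 m/s
          # Subtracting roughly g*t ~ 640 m/s of gravity loss over the burn
          # leaves ~1,500 m/s, broadly consistent with the quoted burn-out
          # speed of about Mach 5 at 30 km altitude.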


      The basic V-2 aerodynamic configuration was based on the shape of the Wehrmacht "S" model bullet. Just like an arrow, fins provided longitudinal stability keeping the rocket "nose-on" into the airflow. But the attitude of the V-2 had also to be controlled when rising vertically at near zero speed, where aerodynamic surfaces are ineffective, and it had to remain stable and controllable up to supersonic speeds of Mach 5 at the limits of the Earth's atmosphere. At the time there was no practical experience or available data on operations at these speeds and altitudes on which to base the designs, so that extensive test programmes were required to improve the design, which went through numerous iterations. Tests were carried out by Rudolf Hermann, who built a supersonic wind tunnel at Peenemünde for this purpose, but they still needed to be supplemented by a comprehensive programme of flight testing to verify the results.


    • The Guidance System
    • The development of the guidance system was a slow and painful process involving in succession three different subcontractors responsible for the main development as well as academic institutions and several other companies involved in sub-systems work. All of these subcontractors however were heavily involved in other major military projects as part of the war effort and they had limited resources available for the V-2 project which was given a lower priority. There were no engineering precedents for such a system and many alternatives were explored by trial and error as part of the development. Management was complicated with engineers from multiple companies working simultaneously on alternative systems and sub-systems. Eventually, von Braun, dissatisfied with the progress, took the work in house at Peenemünde and appointed his own manager of the guidance project.


      The V-2 was launched from locations whose coordinates were known, so the azimuth and distance to the target could be determined. The direction of the missile was set towards the target by aligning fin 1 of the rocket along the desired bearing. This automatically aligned the bearing of the yaw gyro with the target direction.

      The guidance system had three design objectives:

      • Cut off the thrust of the motor at a predetermined velocity depending on the required range (known as Brennschluss).
      • Tilt the rocket so that its axis was 47° to 49° from the vertical at the start of its ballistic trajectory, depending on the desired range.
      • Stabilise the roll and yaw to prevent the rocket wandering off the correct bearing.

      The system also had to function with acceleration forces of up to 8g and speeds of up to Mach 5, from sea level to the rocket's burn out height of 30 kms (20 miles).


      The initial proposal for guidance was to use radio "guide beams" transmitted from the ground, to keep the missile on course, but this was rejected, mainly because of the vulnerability to jamming by the enemy who could also home in on and knock out the radio beacons, but also because the main industry specialist manufacturers had more pressing military priorities. Instead a system to control the missile's pitch and yaw based on a gyro stabilised reference platform was adopted because it was independent of signals from the ground. Towards the end of the war however the decision was reviewed and an alternative system using radio guide beams was also developed.


      In 1934 von Braun engaged Johannes Maria Boykow technical director of Kreiselgeräte - Gyro Devices, to design a guidance system based on a stable gyroscopic inertial platform for the A3 experimental rocket which was to be the forerunner of the V-2. Boykow's initial design was only suitable for maintaining the rocket in a vertical trajectory.

      For sensing, it used two gimbal mounted, position gyros to monitor pitch and yaw but had no roll control. Two linear accelerometers in the form of little wagons moving on tracks sensed any tipping movement of the rocket to the right, or left (yaw) and backwards or forwards (pitch) and provided signals representing the acceleration in these two planes.

      Electronic analogue computers integrated these signals over time to give the speed of the deviation from vertical, and the lateral deviation, or displacement from the desired course due to wind forces, was obtained by double integration over time.
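      In modern discrete-time terms, what those analogue integrators did looks like the sketch below; the gust value and time step are invented for illustration:

          def double_integrate(accels, dt):
              """Integrate lateral acceleration once for velocity and again
              for displacement from the desired course, as the analogue
              computers did continuously."""
              velocity, displacement = 0.0, 0.0
              for a in accels:
                  velocity += a * dt             # first integration: m/s
                  displacement += velocity * dt  # second integration: m
              return velocity, displacement

          # A steady 0.5 m/s^2 side gust acting for 10 s drifts the rocket
          # about 25 m off course (s = a*t^2/2), the displacement signal
          # the controller must then null out.
          print(double_integrate([0.5] * 100, 0.1))  # (~5.0, ~25.2)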

      The system also incorporated three rate gyros mounted on the rocket body to monitor the rate of turn in any direction, pitch yaw and roll, and to provide stable signals to avoid overshoot and oscillation of the controls. (See more about Gyroscopes and Guidance)

      The guidance control system combined the signals from the inertial platform, representing deviations from the desired position, and mixed them with the rate signals for stability and fed the resulting signal to servomotors driving actuators which operated molybdenum jet vanes or rudders in the rocket's exhaust flow to execute course corrections.
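      Mixing a position (displacement) signal with a rate signal is, in modern terms, proportional-plus-derivative (PD) control, and it is why the rate gyros prevented overshoot. Below is a toy simulation with invented gains and a deliberately simplified rigid-body model:

          def simulate(kp=2.0, kd=1.2, dt=0.05, seconds=10.0):
              """Toy attitude loop: vane command from attitude error
              (position gyro) and turn rate (rate gyro) drives angular
              acceleration. Returns the largest attitude error, in degrees,
              seen over the final second of the run."""
              angle, rate, worst = 10.0, 0.0, 0.0   # start 10 degrees off
              steps = int(seconds / dt)
              for i in range(steps):
                  accel = -kp * angle - kd * rate   # PD vane command
                  rate += accel * dt
                  angle += rate * dt
                  if i >= steps - int(1.0 / dt):    # watch the last second
                      worst = max(worst, abs(angle))
              return worst

          print(round(simulate(kd=1.2), 3))  # ~0: rate feedback damps it
          print(round(simulate(kd=0.0), 3))  # ~10: position-only control
                                             # oscillates indefinitely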

      Boykow was an ideas man who left to his subordinates the task of converting these ideas into practical systems. He had planned to use an integrating accelerometer to calculate the velocity of the rocket but he died in 1935 before many of his ideas could be fully developed. However many of his original ideas were used in later systems development.

      Unfortunately his design for the A-3 and its immediate variants proved difficult to scale up and was not suitable for controlling the higher forces and speeds experienced by the V-2 (A-4). The roll controls were also inadequate allowing excessive roll to build up which led to instability and loss of the rocket. The servo control mechanisms were also not powerful enough to operate the jet vanes and the scheme for tilting the rocket into the angle needed for its ballistic flight was too abrupt and unworkable.


      In 1938 von Braun sought the help of Siemens, whose engineer Karl Fieber was appointed to remedy the shortcomings in Boykow's design. He developed a simpler system, known as the LEV-3, using only two electrically driven displacement gyroscopes rotating at 30,000 rpm, mounted on gimbals: the "Horizont" (mounted horizontally), with a single degree of freedom, which controlled pitch, and the "Verticant" (mounted vertically), with two degrees of freedom, which sensed both roll and yaw, together providing a three axis stable platform. The gyroscopes would maintain their original orientation no matter how the rocket moved. Displacement pick-offs on the gyro axes provided a measure of the rocket's deviation from its desired course, and these signals were amplified in thyratron amplifiers and used to activate direction control mechanisms. Roll compensation was provided by electric motors which adjusted small aerodynamic trim tabs (air vanes) at the tips of the four fins.

      The main yaw and pitch control however required much higher forces to change the direction of the rocket, and these also had to operate at both low rocket speeds immediately after lift off where aerodynamic controls are inefficient as well as at very high altitudes above the Earth's atmosphere where aerodynamic controls are not possible.

      The solution was to use four jet vanes or rudders to deflect the rocket exhaust and thus change its course. Similar vanes made from Molybdenum alloy had been used in the A3 but the electric servos controlling them were not able to provide sufficient force. In the V-2, the jet vanes were activated by hydraulic power provided by pumps driven by electric motors in response to amplified control signals from the pitch and yaw gyros. The cost of the jet vanes was also reduced by making them from graphite rather than the molybdenum used in the A-3.

      Fieber's system also used the Horizont (pitch) gyro to tilt the rocket to its required angular attitude after its vertical launch. Four seconds after lift off a clockwork mechanism gradually tilted the pitch gyro by 47° to 49°, depending on the desired range, and the pitch control mechanism automatically aligned the rocket with the new gyro reference angle to set it into its maximum trajectory.
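      The tilt programme can be summarised as a simple schedule. The timings come from the launch sequence listed further below (the tilt beginning 4 seconds after lift-off and complete roughly 36 seconds later); the linear tilt rate is our assumption about the clockwork, not a documented profile:

          def pitch_gyro_tilt(t, final_tilt=47.0, start=4.0, duration=36.0):
              """Commanded gyro tilt from the vertical, in degrees, at time
              t seconds after lift-off, assuming a linear clockwork rate."""
              if t <= start:
                  return 0.0
              return min(final_tilt, final_tilt * (t - start) / duration)

          print(pitch_gyro_tilt(22.0))  # mid-programme: ~23.5 degrees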


      The above two control systems oriented the rocket along a pre-determined path in a vertical plane pointed at the target but they did not control its velocity and hence its range. This was controlled by the timing of the engine cut-off. As with the directional controls, the range is set at the point of cut-off and no further adjustments are possible.

      Three different methods of determining the cut off point were tried. The simplest, but very crude, method was to fuel the rocket with exactly the correct amount of fuel so that it cut off when the fuel was exhausted. This however did not provide the precision required and two other systems, one using accelerometers and the other using radio signals from the ground were developed.


      The LEV-3 system was modified by the addition of a Pendulous Integrating Gyroscopic Accelerometer or PIGA, developed by Fritz Mueller at the Kreiselgeräte Company in 1939 (See details of How the PIGA Works). The torque sensor in the PIGA's main shaft provided an electrical signal representing the rocket's acceleration, while cumulative revolutions of the shaft represented the simultaneous integration of this acceleration over time, providing a signal corresponding to the rocket's velocity. Cam switches on this shaft, or on an auxiliary shaft geared to it, were used to initiate missile control sequences such as engine throttle down and shut off when the rocket had reached pre-determined velocities. The PIGA was highly accurate, achieving error rates of between 1 part in 1,000 and 1 part in 10,000.

      The distance travelled could also be determined by integrating the speed over time in an electronic analogue computer circuit.
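      The cut-off logic itself amounts to integrating sensed acceleration into velocity and firing a switch at a preset value, which is what the PIGA's cams did mechanically. The thrust profile and the 1,500 m/s threshold below are invented round numbers for illustration:

          def brennschluss_time(accel_profile, dt, v_cutoff=1500.0):
              """Return the time and velocity at which the integrated
              acceleration first reaches the preset cut-off velocity."""
              v = 0.0
              for step, a in enumerate(accel_profile):
                  v += a * dt              # the PIGA integrates mechanically
                  if v >= v_cutoff:
                      return step * dt, v  # cam switch fires: fuel shut off
              return None

          # Acceleration ramping from ~1 g to ~7 g over a 65 s burn, as
          # quoted in the launch sequence below, sampled at 0.1 s:
          dt, burn = 0.1, 65.0
          profile = [9.81 * (1 + 6 * i * dt / burn)
                     for i in range(int(burn / dt))]
          print(brennschluss_time(profile, dt))  # cut-off at ~48 s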


      The alternative to the PIGA accelerometer system for determining the velocity of the rocket was a new Doppler system devised by Professor Wilhelm Wolman of Dresden University in 1940. It worked by sending a radio signal from a base station on the ground to a transponder in the rocket. The relative velocity between the fixed base station transmitter and the receiver in the moving rocket caused an apparent frequency shift in the received signal, proportional to the relative velocity. The received signal was then retransmitted by the rocket's transponder back to a receiver at the base station undergoing a second frequency shift on the way. By measuring this two way frequency shift, the velocity of the rocket could be determined. When the rocket reached its target velocity, a separate signal was sent to the rocket to shut off the rocket's motor.
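      The physics reduces to one relation. For a rocket receding far slower than light, each leg of the round trip shifts the frequency by about v*f0/c, so the measured two-way shift is about 2*v*f0/c. The 30 MHz carrier below is an assumed figure for illustration only:

          C = 3.0e8      # m/s, speed of light
          f0 = 30e6      # Hz, assumed ground transmitter frequency
          v = 1500.0     # m/s, rocket receding at roughly cut-off speed

          df = 2 * v * f0 / C
          print(f"two-way Doppler shift ~ {df:.0f} Hz")  # ~300 Hz
          # Inverting the same relation recovers velocity from the shift:
          print(f"velocity from shift ~ {df * C / (2 * f0):.0f} m/s")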

      The Doppler cut off system was used in the first successful launch of the V-2 in 1942. Surprisingly this electronic system was not as accurate as the PIGA mechanical gyro system.


      Concerned by the slow progress of the development of the guidance system, in 1939 von Braun sought the help of the Askania company, where Waldemar Möller had developed a three axis inertial system for the Luftwaffe. Unfortunately Askania's main guidance technology had been optimised for submarine use and was not transferable to airborne missiles. Möller however showed the need for rate gyros to provide damping to improve system stability.

      Input was also requested from the Anschütz Company, who manufactured autopilots and gyrocompasses.


      Later in 1939, frustrated by the continued lack of progress, von Braun eventually brought all guidance development in house and appointed Ernst Steinhoff from the University of Darmstadt as head of the Peenemünde Guidance Division with responsibility for Flight Mechanics, Ballistics, Guidance and Control and Instrumentation. Steinhoff, an enthusiastic Nazi party member, took over management control of the relevant projects of the Siemens, Askania and Anschütz companies.


      The same year Steinhoff recruited Helmut Hoelzer from Telefunken in Berlin. Like Steinhoff, Hoelzer was another Darmstadt alumnus but his studies had been interrupted when he was thrown out after getting into an argument with a Nazi student organisation. His task was to develop an alternative guidance system based on radio guide beams, a technology which had been initially rejected. It was considered that the guidance system was only active for the first 60 seconds of the flight, too short a time for the enemy to instigate electronic countermeasures. Furthermore, like all radio systems used on the V-2, the system could switch between 10 different frequencies to make jamming even more difficult. (Note that the Austrian born Hollywood film star Hedy Lamarr patented a similar frequency hopping system for controlling torpedoes in 1942.)

      Hoelzer's system was based on an instrument landing system developed by the Lorenz company, similar in many ways to modern aircraft instrument landing systems. It used a 3 kilowatt, 50 megahertz radio frequency transmitter feeding alternately, with a switching frequency of 50 Hertz, two dipole antennas located 200 metres apart, 12 kilometres behind the missile launch point, sending two parallel, overlapping radio beams in the direction of the target. The line of overlap thus corresponded to the guideplane or direct trajectory towards the target. A radio receiver on a missile travelling on track would see equal signal strength from each transmitter and hence a constant amplitude signal. But if the missile diverged from the guide plane, one of the signals would be larger than the other and the receiver would see an error signal in the form of a square wave whose amplitude was proportional to the lateral divergence from the desired track. The guidance control system used this error signal to activate the rudders to bring the error back to zero. By modulating the alternate transmitted signals with different audio frequency tones, it was possible to determine whether the deviation was to the left or to the right. The electronic, radio beam guidance system was potentially more accurate than the mechanical, gyro based system.
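      A sketch of the error signal's geometry is below. The Gaussian beam shapes, widths and squint angle are all invented; what matters is that the difference between the two alternately received signal strengths is zero on the guide plane and grows, with a sign, as the missile strays off it:

          import math

          def lobe(offset_deg, boresight_deg, width_deg=5.0):
              """Received relative signal strength from one beam lobe."""
              return math.exp(-((offset_deg - boresight_deg) / width_deg) ** 2)

          def beam_error(offset_deg, squint_deg=2.0):
              """Difference between the alternately switched lobes: zero on
              track, signed and growing with lateral deviation."""
              return (lobe(offset_deg, squint_deg)
                      - lobe(offset_deg, -squint_deg))

          for off in (-2.0, 0.0, 2.0):
              print(f"offset {off:+.0f} deg -> error {beam_error(off):+.3f}")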

      This system, like all feedback control systems, suffered from possible overshoot and consequent instability as the momentum of the missile would keep it moving past the guide plane after the error signal had been zeroed causing an error signal in the opposite direction. In 1940 Hoelzer developed an electronic mixing system with an analogue computer to modify the received error signal and damp out these oscillations and in 1942 he built an analogue computer to calculate and simulate V-2 rocket trajectories.

      Hoelzer's analogue computer mixing device was later developed for use in guidance systems based on gyroscopic controls where it enabled stability to be maintained using position gyros only, thus eliminating the need for rate gyros.


    • Construction and Subsystems
    • The head of the Design Office and Chief Designer of the V-2 was Walter Riedel, a founder member of the team. Standing 14 m (46.1 ft) high, the V-2 weighed 12,500 kg (28,000 lb) when fully fuelled and ready for flight, carrying a payload of 1000 kg (2,200 lbs). The fuselage was originally constructed from Aluminium, but this had to be reinforced by steel bands after early models broke up in flight. Both the alcohol and Oxygen tanks were constructed from an Aluminium-Magnesium alloy.

      The layout of the warhead, the guidance equipment, the fuel tanks and the engine within the fuselage is shown in the V-2 cutaway drawings.

      A 50 volt Nickel-Iron battery provided the main electrical power and two 16 volt Nickel-Cadmium batteries powered the gyroscopes. Electronic inverters converted the DC battery power for the gyros to 500 Hertz, three phase AC which drove the rotors at 30,000 r.p.m.
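      The quoted electrical figures are self-consistent if the gyro motors were two-pole AC machines, which is our assumption since the text does not give the pole count:

          # Synchronous speed of an AC motor: rpm = 120 * f / poles
          f, poles = 500, 2        # quoted 500 Hz supply, assumed 2 poles
          print(120 * f / poles)   # 30000.0 rpm, matching the quoted speed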

      Pneumatic power to operate valves and to pressurise the fuel tanks was provided by Nitrogen gas bottles.


    • The Warhead
    • Like the V-1, the V-2 could carry a warhead of 1 ton, but because frictional and shockwave heating of the V-2 fuselage during its high speed re-entry through the atmosphere raised its temperature to over 650° C (1200° F), it was not possible to use the high explosives employed in the V-1, which were too sensitive to heat. Instead, 738 kg of the less heat-sensitive explosive Amatol Fp60/40 was used as part of its 975 kg payload. Nevertheless this package, combined with the impact of the 4 ton missile body hitting the target at three times the speed of sound, was capable of flattening a city block.
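      The contribution of the missile body's sheer momentum can be roughed out from the quoted figures, taking Mach 3 as about 1,000 m/s near sea level (an approximation):

          m = 4000.0     # kg, quoted impact mass of the missile body
          v = 1000.0     # m/s, roughly Mach 3 at low altitude (assumed)

          ke = 0.5 * m * v ** 2
          print(f"kinetic energy ~ {ke / 1e9:.1f} GJ")        # ~2.0 GJ
          print(f"~ {ke / 4.184e9:.2f} tons TNT equivalent")  # ~0.48
          # i.e. the impact alone added destructive energy of the same
          # order as the warhead itself.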


    • Launch Sequence
    • -60 seconds - Main alcohol and Oxygen valves opened, allowing 9 kg of fuel per second by gravity into the combustion chamber. Igniter lit under motor

      -20 seconds - Ignition confirmed and turbopump starts rotating

      -10 seconds - Turbopump now running at maximum rpm. External power supplies switched off and internal batteries switched on. Initial thrust stage starts

      -5 seconds  - Thrust reaches 8,000 kg, and all systems working

        0 seconds  - Main thrust stage of 25,000 kg starts

      + 8 seconds  - External power supply jettisoned; all systems now running on internal power. Thrust rises to 25,000 kg;

      + 10 seconds - Lift-off; Acceleration at lift-off 1g. Trajectory timing sequence starts. Vertical axis still at 90°.

      + 14 seconds - Rocket starts inclining from vertical

      + 24 seconds - Rocket reaches the speed of sound, Mach 1

      + 35 seconds - Mach 2 reached

      + 50 seconds - Completes pre-programmed tilt of approximately 47°

      + 54 seconds - Burnout; Acceleration at burnout 7g. Turbopump stopped at altitude of 30.5 km (20 miles) and distance down track of 27.3 kms (17 miles) when rocket is travelling at Mach 5.5. The rocket now continues on ballistic trajectory for another 4 minutes, silently impacting its target at a speed of Mach 3.

      The total burn time - 65 seconds of which 55 seconds are at full power.


      See also the muzzle velocity of Germany's Great Guns.


    • The Launching System
    • Klaus Riedel designed the V-2 launching system. It was intended to operate from mobile launch sites to avoid detection by the enemy and needed over 30 support vehicles to carry the propellants, test stands, pumps, spares, radio and service equipment. The rocket was brought to the site on a towed transportation frame called the Meillerwagen after the name of its manufacturer. The frame could be elevated by a hydraulic ram to hold the rocket in place during the set up, fuelling and launch. Riedel was also involved in designing the submarine version of the V-2, launched from a submersible canister. He was killed in a car crash in 1944 when he fell asleep at the wheel and ran into a tree.


    • The Production
    • Development and engineering models of the V-2 were produced on the Peenemünde site but following a major bombing raid by the Royal Air Force on 18th August 1943, just before full production was to start, it was decided that volume manufacture of the missiles would take place in a less vulnerable, underground facility near Nordhausen, in the Harz Mountains. Arthur Rudolf, Chief Production Engineer at Peenemünde, was put in charge of making the transfer. The site chosen was formed from a series of tunnels in what had previously been a gypsum mine but were now used for secure storage of fuel and chemicals. The factory was known as Mittelwerk - "Central Works" and became a place of unspeakable horror.

      Because of Germany's chronic manpower shortages as a result of the war, many factories used prisoners and concentration camp inmates as slave labour, though not for highly secret projects. But Peenemünde was so desperate for labour that by June 1943, at the request of Rudolf, they had already started using prisoners from the Buchenwald concentration camp supplied by the SS. The volume production facility at Mittelwerk however needed many more. Before production could start, the tunnels had to be extended and expanded to accommodate the huge rockets as well as the overcrowded primitive "sleeping quarters" for the workers themselves. (Calling their accommodation "living quarters" would imply that the unfortunate inmates actually had a life).

      The construction and manufacturing were carried out by prisoners from the Mittelbau-Dora concentration camp, a sub-camp of Buchenwald, under the brutal supervision of the psychopathic Hans Kammler of the SS. In just four months, caverns and interconnecting tunnels were excavated, manufacturing equipment was installed, workers were moved in and trained, and on New Year's Eve 1943 the first prototype V-2 was delivered.

      Rudolf, a committed Nazi, was in charge of day to day production operations at Mittelwerk and by the end of the war almost 6000 rockets had been delivered, 150 - 200 from Peenemünde and 5789 from Mittelwerk, but the human cost was horrendous.

      Over a year and a half, 60,000 slave labourers from all over occupied Europe were put to work in the Mittelwerk's tunnels manufacturing weapons to be used against their families and compatriots back home. 20,000 of them died there from starvation, disease, beatings, shootings, accidents, exhaustion and collapse in the most squalid and inhuman conditions. Of these deaths, 350 were by hanging, including 200 executed for alleged acts of sabotage.


      Because production was started before the design was ready, the first deliveries were essentially hand made prototypes with structural and performance weaknesses yet to be resolved, or even discovered. Von Braun's deputy, Walter 'Papa' Riedel, head of the design office, was overwhelmed by the scale of the task, the shortages of materials and the volume of changes. As a result he was unfairly blamed for delivery delays and was replaced by Walther Riedel (no relation).


    • Accuracy
    • Because the V-2 was only guided for less than the first 10% of its trajectory it was significantly less accurate than the V-1 which was guided all the way to its target. The Circular Error Probability (CEP) for the V-2 was 17 kms (11 miles) compared with 13 kms (8 miles) for the V-1.

      Because of its poor accuracy, the V-2 was incapable of hitting precise military targets and from 320 kilometres (200 miles) away it could only hit a city-sized target causing random civilian casualties. In view of this known inaccuracy, the resulting civilian deaths could not be considered as what military chiefs euphemistically call "collateral damage". They were a deliberate objective.

      Since the V-2 was deployed before the development was complete, it is likely that later versions would have had improved accuracy.


  • Effectiveness of the V-2
    • The Military Value
    • There was no defence against the V-2 and, like the V-1, it had a considerable psychological effect on the civilian population. Once it was airborne it was impossible to detect and to intercept. Its ballistic trajectory enabled it to travel in silence giving no warning before impacting its target at speeds three times faster than the speed of sound. Its extreme speed also meant that it was too fast to stop with fighter planes. Its long range and mobile launching platforms also minimised the risk of casualties to the launch crews and it was not necessary to risk German pilots to make it work.

    • The Military Cost
    • The V-2 rocket was made from expensive materials and used exotic fuels at a time of serious economic scarcities. Even though it was manufactured by slave labour, it cost around the same as a high performance fighter plane but was only good for a single sortie. It carried a warhead of less than one ton and was thus an extremely expensive way of delivering a relatively small amount of explosive to its target, so that many missiles would have to be launched in order to do significant damage. This was further compounded by its inaccuracy, since the probability of pin-pointing and destroying a specific target was very small, requiring several attempts to make a direct hit.

      By comparison a single Lancaster Bomber had a longer range and could carry eight tons of explosives to multiple targets with greater accuracy and it could be used many times over.

      Perhaps the greatest economic cost was the opportunity cost. The V-2 programme diverted resources from other more cost effective projects. The cost of the V weapons programme relative to Germany's economy was more than the relative cost to the Allies of the Manhattan Project, which produced the atomic bomb that really did change the course of the war.

    • The Death Toll
    • The V-2 has the distinction that more people were killed manufacturing it than were killed by its use. Estimates of the V-2's effectiveness as a military weapon in terms of the people killed vary by up to 50%. According to a 2011 BBC documentary, an estimated total of 9,000 civilians and military personnel were killed by the V-2 bombardment, while 12,000 forced labourers and concentration camp prisoners were killed producing the weapons.

      Its actual performance in the field is more revealing.

      Between September 1944 and March 1945 Germany launched 3,170 V-2s at Allied cities in Europe. The list below shows the numbers targeted at each city.

      • Belgium: Antwerp 1610, Liege 27, Hasselt 13, Tournai 9, Diest 2
      • England: London 1358, Norwich 43, Ipswich 1
      • France: Lille 25, Paris 19, Tourcoing 19, Arras 6, Cambrai 4
      • Holland: Maastricht 19
      • Germany: Remagen Bridge (The last gateway to Germany over the Rhine - Occupied by US forces) 11 (Not one hit the bridge)

      Taking Antwerp and London together, two large cities which were easy enough to hit, 1376 were killed in Antwerp, including 567 who were killed by a single V-2 impact on a crowded cinema, and 2754 were killed in London. Thus a total of 4130 were killed by 2968 rockets, a hit rate of 1.4 deaths per rocket. A very expensive killing machine.

      Inaccurate guidance systems are often given as the reason for the low hit rate, but there were other factors. Early V-2 production models had structural weaknesses which caused them to break up in flight before reaching the target. It is also claimed that misinformation put out by the British Intelligence Services, who announced incorrectly that the rockets had over-shot their targets, caused the German artillery units, who had no way of checking their accuracy, to re-target their missiles so that they fell short of London.

    • The Human Cost
    • Estimates of the number of people who died manufacturing the rockets vary between 12,000 and 25,000, depending mainly on whether the victims died at their place of work or died elsewhere as a result of their brutal treatment and inhuman living conditions. An estimate of 20,000 deaths is generally accepted as reasonably accurate.

      The total production of V-2 rockets before the end of the war was around 6,152, but only 3,170 of these were actually used in military operations. About 600 were used for testing and training, some were unserviceable, others had been destroyed by launching misfires, about 250 were still at Mittelwerk awaiting delivery and the balance was stored at German military facilities on the war fronts ready for action. The human cost of producing these 6,152 rockets (using the estimate of 20,000 deaths) was 3.25 deaths per rocket, or 6.31 deaths per rocket if only the rockets actually fired are taken into account.
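      The grim arithmetic in this section is easily verified against the quoted figures:

          produced, used, deaths = 6152, 3170, 20000
          print(f"{deaths / produced:.2f} deaths per rocket built")  # 3.25
          print(f"{deaths / used:.2f} deaths per rocket fired")      # 6.31
          # and the battlefield rate quoted above for Antwerp plus London:
          print(f"{4130 / 2968:.2f} killed per rocket")              # 1.39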

    • The Effect on the War
    • Despite the enormous scale of the effort and the technical breakthroughs in rocket technology, the V-2 did not change the course of the war. It was too inaccurate, too expensive, its payload was too small and it was not available in sufficient quantities to make a difference so that it proved to be an enormous waste of resources.

      Bombing Dresden and Tokyo with conventional bombers each caused more destruction in a single day than both the V-1 and V-2 programmes together did in a year.


  • V-2 Take-over by the SS
  • Walter Dornberger, an artilleryman and rocket enthusiast, was also a wholehearted supporter of the Third Reich, which gave him sufficient influence to obtain budget approval for his extended rocket development programmes and to maintain Hitler's lukewarm support, despite the enormous costs, through their numerous setbacks and delays. Most of the delays were however not caused by incompetence or things going wrong, but were initially the result of over-selling the programme targets and, later in the war, of unrealistic demands for volume deliveries before the rocket was ready for production. This was compounded by severe materials and skilled manpower shortages. Despite the urgent need to get the V-2 into production, the Peenemünde design team were expected to develop several variants of the V-2, including a submarine launched version, a winged version, the A-9 or Glider A-4, which could increase range by gliding towards its target, and another winged missile, the Wasserfall, designed as an anti-aircraft weapon, as well as several alternative fuel systems. All of these needed the development of radical new technologies and resulted in diluting the effort of the V-2 team on their primary task.

    By the summer of 1943 it was clear that Germany's triumphal war offensives had been halted and the tide was turning against them. They had failed to establish air superiority during the Battle of Britain in the summer of 1940, but the biggest shock was the surrender of Germany's 6th Army to the Russians at Stalingrad in February 1943, followed by the surrender of the Axis forces in North Africa to the Allies in May the same year. Now a sense of desperation began to replace the belief in the invincibility of German military might. Salvation was sought by means of new wonder weapons such as the V-1 and particularly the V-2, whose priorities were raised from desirable to essential, but production was never enough to satisfy the pressing demand, and infighting between the various military commanders and ministers intensified as they attempted to gain control of these prestigious new weapons, to speed up production and to enhance their political power.


    The V-2 was an Army Ordnance project but its resources were allocated by Albert Speer, the Armaments and War Production Minister, who was under pressure to supply more assets such as radar and war planes to the Luftwaffe. Speer made his first moves to gain influence in December 1942 by setting up the A-4 Special Committee to oversee production of the missile and appointing Gerhard Degenkolb, a fanatical Nazi who had sorted out problems with Germany's railways and transport infrastructure, to manage it. Dornberger however remained in charge of the overall project with his long term team member Arthur Rudolf, also a committed Nazi, as head of production. Degenkolb consolidated his power by creating a Labour Supply sub-committee to provide the manpower. By the following April, Degenkolb and Rudolf proposed the use of concentration camp prisoners for producing the V-2, and by June the first batch of slave labour workers started work on production at Peenemünde.

    After the RAF bombing of Peenemünde in August 1943, it was decided to move the production to the less vulnerable underground tunnels at Mittelwerk with Arthur Rudolf as head of production and the Peenemünde facility was downgraded to a pilot plant manufacturing units for development and test.


    Waiting in the wings was Heinrich Himmler, chief of the dreaded SS, the Schutzstaffel - "Protection Squadron" - Hitler's most loyal and fanatical force, which was effectively a fourth, autonomous branch of the Wehrmacht, the armed forces, which was responsible for the Gestapo (the secret police) and which set up and controlled the concentration camps.

    In September 1943 Himmler appointed SS Brigadier General Hans Kammler, who had been in charge of building the extermination camps and gas chambers at Auschwitz-Birkenau, Majdanek and Belzec, to take charge of building the production facility for the V-2, giving Himmler's SS a foothold in the V-2 missile programme. Kammler used his control of the sources of concentration camp labour to gain overall control of the production at Mittelwerk. By this time Degenkolb had fallen from grace and ended up in a mental asylum.

    In February 1944 the V-2 was still not ready for deployment and the Army were desperate to get their hands on the new weapon. Himmler saw this as an opportunity to sideline Speer. Von Braun was summoned to Himmler's headquarters where the SS chief offered him all the resources at his disposal. Though not a Nazi ideologue, in 1940 von Braun had been pressured into joining the SS, having sought advice from his superior Dornberger, who advised him that he had no option but to accept if he wanted to maintain his role in rocket science. He was not an active member of the SS and rarely if ever wore the uniform. This time he was more sure of his position, but the pressure was different. Sensing a takeover of the V-2 project by the SS, von Braun declined Himmler's offer, a rebuff which had its consequences.

    The following March, von Braun, Klaus Riedel and Gröttrup were arrested, together with von Braun's younger brother Magnus, by the Gestapo and accused of sabotaging the V-2 programme by not being sufficiently committed to the war effort and of using the financing of the Reich to pursue their interests in space exploration - a serious charge of treason which carried the death penalty. Riedel and Gröttrup were rocket enthusiasts noted for their open left wing liberal views. Von Braun, who had a pilot's licence and the use of a private plane, was further accused of planning to defect to England taking with him the plans for the V-2. It was only through the intervention of Dornberger and Speer, who argued that without these three indispensable players there would be no V-2, that they were released a few days later, duly chastened by their experience, and allowed to return to work.

    Himmler's chance came once more the following July after the abortive attempt on Hitler's life by Colonel Count von Stauffenberg. In the repercussions which followed, Himmler replaced Army General Fritz Fromm, who had been implicated in the attempted coup, consolidating his control over Army Ordnance and Speer's Armaments Ministry. One of his first actions was to take control of the entire V-2 project.

    On August 6th he gave Kammler complete authority over the programme in place of Dornberger, and followed up on September 2nd by giving Kammler tactical control of the missile deployment units, including targeting. On September 8th 1944, under Kammler's command, the first V-2 rockets were fired at Antwerp and London.

    Meanwhile, Dornberger, who had initiated the project, built the team and nurtured it for 10 years, was relegated to the position of "Inspector of Rocket Troops and Chief of Supply and Training".


    In any large gathering there will be people from a range of backgrounds. This was also the case with Germany's rocket development team. It included Nazi extremists whose party ultimately took control but it also included engineers and technicians who were unintentionally caught up in their web.

    The technical team, which was led by Wernher von Braun, mostly managed to stay out of the worst of the politicking. Von Braun tends to get most of the credit for the technical achievements of the V-2 development. He was a man of vision, a brilliant engineer and a great motivator who did not rely on fear and intimidation to get things done, but he is criticised for using concentration camp labour to achieve his goals. As technical director he had no administrative responsibility for the oversight of slave labour, but he was fully aware of the horrors of the concentration camps, including Buchenwald, where he was personally involved in selecting skilled prisoners to work at Peenemünde, and the Mittelwerk underground factory, which he had inspected. Besides this, from September 1944 von Braun's brother Magnus had been working at Mittelwerk where he was responsible for sorting out problems with the V-2 vane servomotors.

    In his defence, this occurred after the SS had taken control of the project, and challenging their orders would not only have been futile, it would also have been highly dangerous, as he had already found out when he was arrested by them. Cooperating with the SS was necessary for self preservation. But von Braun had another option not available to others: he could have walked away from this evil regime. He had connections in high places and access to a private plane which gave him the means to flee. Instead he chose to stay. The least that can be said is that he was complicit in the actions of the SS.

    When Germany eventually surrendered unconditionally on May 7th 1945, von Braun, who had led the technical development of the V-2 and its predecessors for over ten years, was still only 33 years old.


  • What Happened to the Players
  • In January 1945 as the war was coming to a conclusion and the Russians were moving ever closer to Peenemünde, Kammler ordered von Braun's team to relocate to Thuringia near Mittelwerk taking with them their equipment and the Peenemünde archive, weighing 14 tons, containing all of the available documentation about the V-2 project. Then in April, as the Allied forces advanced deeper into Germany, von Braun was further ordered to take 500 of his key technical staff to the town of Oberammergau in the Bavarian Alps, far from the front lines.

    Unable to carry the vital Peenemünde archive with him, before leaving Thuringia von Braun arranged for it to be hidden in an abandoned iron mine in the Harz mountains near Nordhausen, to prevent it being destroyed by the SS and as a potential bargaining chip for negotiating his fate with the Allies. In Oberammergau the team were closely guarded by the SS, who had orders to execute them if they were about to fall into enemy hands, but despite this, on May 2nd, two days after Hitler's suicide, they managed to escape the clutches of the SS and make contact with the Americans, to whom they surrendered.

    At the same time there followed a frantic rush by the Americans to gather as much V-2 information and hardware as possible from Mittelwerk before the arrival of the Russians, who had been ceded this territory at the Yalta Conference between the Allies in February. Over the next few days a series of trains hauling a total of 341 freight cars loaded with components for about 100 rockets, machinery and equipment, together with the Peenemünde documentation, left Mittelwerk bound for the USA, ignoring a prior agreement to share these spoils 50/50 with the British.

    When the Russians eventually got there, the cupboard was bare.


    Dornberger's 500 engineers were interviewed and 127 of them, including Wernher and Magnus von Braun, Rudolf Hermann, Helmut Hoelzer, Fritz Mueller, Walther Riedel, Arthur Rudolf, Martin Schilling and Ernst Steinhoff, were selected to move to the USA in what was known as Operation Paperclip. In the USA they were first accommodated at Fort Bliss near El Paso in Texas before most of them moved in 1950 to the Redstone Arsenal in Huntsville, Alabama. This group formed the core of America's early military missile programme and, later, of NASA, the civilian space exploration agency which eventually put a man on the Moon.

    Papa Riedel, no friend of von Braun, was left behind in Thuringia where he was captured by the British. After a short detention he went to work for the British Government's Ministry of Supply (MoS) in Germany. In 1947 he emigrated to England where he worked at the Royal Aircraft Establishment, Farnborough and later at the MoS Rocket Propulsion Department in Westcott. He never worked in, or even visited, America.

    Hellmuth Walter was also captured by the British and taken to the UK where he worked for the Royal Navy before returning to Germany in 1948. He later emigrated to the USA.

    Dornberger was interrogated by the Americans and the British who, in the absence of Kammler (see below), were intent on holding a senior officer responsible for the atrocities in Mittelwerk and the random bombing of civilians by the V-2 rockets. However they were unable to make the charges of war crimes stick, particularly the second charge since the Allies themselves had been guilty of 'carpet bombing' German and Japanese cities, and Dornberger remained a prisoner of war (POW) for two years in the UK. On his release together with other POWs, he joined the rest of the Paperclip group in the USA.

    Gröttrup chose to stay in Germany working for the Russians, who at first treated him regally, setting him up in a spacious residence near Mittelwerk where they established a rocket research institute. His task was to provide full documentation, including research reports and production drawings of the V-2 technology, and to re-start production with hundreds of Germans placed at his disposal. It was not to last. Once the factory was fully operational, Stalin ordered all the missile activities together with the German rocket experts (as well as nuclear physicists and experts from the aircraft industry) to be transferred to Russia. In October 1946, in a meticulously planned surprise dawn raid, 2500 Russian security officers and army units rounded up the Germans and their families and deported them to Russia where they were to produce a Russian version of the V-2.

    Living conditions in Russia were less than ideal. The missile research labs on Gorodomlya Island, 200 miles northwest of Moscow, to which they were eventually assigned, were poorly equipped and there were no facilities for testing new concepts. They were physically as well as intellectually isolated. Meanwhile, as the Russians built up their own independent rocket development team led by Sergei Korolev with all the resources he could possibly need, Gröttrup's team were progressively sidelined and given responsibility only for mundane engineering tasks. By 1951 the Russians had pumped the Germans dry of any technology they wanted and the first group of Germans was allowed to return to East Germany, with most of them repatriated by 1953.

    Gröttrup remained till the end, then made his way back to West Germany where he worked for the telecommunications company SEL (Standard Elektrik Lorenz) until 1958.

    In 1968 and 1969 jointly with German electrical engineer Jürgen Dethloff, Gröttrup filed patents for the Smart Card, or SIM (Subscriber Identity Module) Card.

    Kammler disappeared without trace in May 1945. There is speculation that he committed suicide but his body has never been found, nor have there been any live sightings of him.

    Himmler was captured by the British in May 1945 and committed suicide by biting into a cyanide capsule while in custody.

    Speer was also captured by the British. He was the only Nazi at the Nuremberg War Crimes Trials to admit any guilt and to apologise for his misdeeds. He was sentenced to 20 years in Spandau prison which he served in full.


  • The V-2 Legacy
  • While the V-2 was a failure for its intended purpose as a knockout military weapon, it was a trailblazer for space technology, pioneering rocket propulsion and guidance technology. The Allies were shocked to learn of the advanced state of the German technology and scrambled to get their hands on it. The biggest beneficiaries were the Americans who offered the key players in Dornberger's team the possibility of settling and continuing their work in the USA. As a result of the US Operation Paperclip, the dreams of the German space pioneers were at last turned into reality as they applied key V-2 technologies for rocket fuels, combustion chambers, injectors, thrust nozzles, thermal management, fuel pumping systems, cryogenics, guidance and control systems and supersonic aerodynamics, first to US missile programmes and eventually to the Apollo programme's Saturn rockets which carried the Americans to the Moon. The Redstone rockets which launched America's first astronaut Alan Shepard into sub-orbital space in May 1961, followed by Gus Grissom in July of that year, were powered by North American Aviation's version of the V-2 engine with its thrust scaled up from the V-2's 55,000 lbs to 75,000 lbs.


    By contrast, most of the V-2 engineers who went to Russia did not go willingly and did not have the freedom to develop their ideas. Instead the Russians set up a technology transfer programme to appropriate their ideas and then used them to develop their own independent missile and space exploration programmes. The Sputnik was inspired by German technology but developed by the Russians themselves without any direct input from the German engineers.


It is ironic that after the war the Germans and the Allies discovered that their weapons priorities had been so mismatched. The Germans had feared missile attacks and put their faith and resources into rocketry while discounting atomic weapons. The Allies on the other hand feared atomic weapons and had applied their resources to building such a capability rather than unmanned missile delivery systems.


1944 Samuel Ruben, an independent inventor in the USA, developed the Mercury button cell, which was licensed to a company owned by Philip Rogers Mallory. The war years stimulated the development of new cell chemistries, bringing water activated, Silver oxide and Mercuric oxide cells after more than forty years with few major advances.

The Mercury cell is an aqueous system primary cell based on Zinc and Mercuric oxide. It made a major impact at the time, replacing the poorer performing Zinc Carbon cells, providing high energy density and the ability to work in harsh environments. Millions were produced by Mallory for powering "Walkie Talkie" two way radios, amongst other things, as part of the war effort. With the invention of the transistor, Mercury cells were eagerly adopted for powering hearing aids and transistor radios. Subsequently the use of Mercury in batteries has been banned by many countries because of its toxicity and Mercury cells have been replaced by other cell chemistries.


Ruben and Mallory went on to found the Duracell company.


1944 A. Brenner and G. E. Riddel discovered the possibility of Electroless Plating or Electroless Deposition, which they subsequently developed and patented in 1947. L. Pessel also applied for a patent for a method of plating non-metallic objects in 1944. Electroless plating uses a redox reaction to deposit metal on an object without the passage of an electric current. Because it allows a constant metal ion concentration to bathe all parts of the object, it deposits metal evenly along edges, inside holes, and over irregularly shaped objects which are difficult to plate evenly with electroplating. Electroless plating is also used to deposit a conductive surface on a nonconductive object to allow it to be electroplated. The chemical reduction process depends upon the catalytic reduction of metallic ions in an aqueous solution containing a chemical reducing agent. The process needs a catalyst such as Platinum or Palladium to kick start it, but once the process has started the catalytic action of the deposited metal is often enough to sustain the reaction. The process is extensively used to deposit metal coatings on plastic parts either for cosmetic purposes or to provide EMC shielding for electronic circuits contained within plastic housings. It is also used in the manufacture of PCBs.


1945 British born science fiction writer, Arthur C. Clarke, author of "2001: A Space Odyssey", published in "Wireless World", a radio enthusiasts' magazine, "Extra-Terrestrial Relays - Can Rocket Stations Give Worldwide Radio Coverage?". In it, he described the principle of geostationary satellites used as communications relay stations. He envisaged three manned space stations spaced 120 degrees apart in an equatorial orbit enabling communications between Earth stations on the ground.


Geostationary satellites had been proposed in 1928 by Herman Potocnik, though communications was not their prime purpose, and his ideas fell on stony ground. A satellite in a geostationary orbit appears stationary with respect to a fixed point on the rotating Earth, facilitating communications from a fixed, high gain antenna on the ground.

The orbit of a geosynchronous satellite, by contrast, is not exactly aligned with the equator but is inclined at an angle to it. When viewed from a fixed point on the ground its position will appear to oscillate North and South daily around a fixed point in the sky. A moveable antenna following these oscillations will therefore be required to maintain optimum communications with a geosynchronous satellite.
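
The altitude of the geostationary orbit follows directly from Kepler's third law: it is the orbital radius at which a satellite's period matches the Earth's sidereal rotation period. A minimal sketch of the calculation, using standard values for the Earth's gravitational parameter, sidereal day and equatorial radius:

    # Geostationary orbit altitude from Kepler's third law:
    # r^3 = GM * T^2 / (4 * pi^2)
    import math

    GM = 3.986004418e14    # Earth's gravitational parameter (m^3/s^2)
    T = 86164.1            # sidereal day (s), one full rotation of the Earth
    R_EARTH = 6378.137e3   # equatorial radius of the Earth (m)

    r = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    altitude_km = (r - R_EARTH) / 1000.0
    print(f"Geostationary altitude: {altitude_km:,.0f} km")   # about 35,786 km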


When asked why he didn't patent the idea, Clarke replied that he didn't really expect to see it in his lifetime.


In 1964, Harold Rosen's Syncom 3 was the first communications satellite to be launched into a geostationary orbit.


See more about Communications Satellites and Satellite Orbits


1945 John R. Ragazzini at Columbia University demonstrated an operational amplifier (op-amp), implemented with vacuum tubes, incorporating ideas from technical aide George A. Philbrick. An op-amp is a high gain DC amplifier with a voltage gain of 100 to 100,000 or more, a very high (ideally infinite) input impedance and a very low (ideally zero) output impedance. Op-amps can not only add or subtract incoming signals but can also invert, average, integrate, and otherwise manipulate them, facilities which later made them the ideal and indispensable building blocks of analogue electronic circuits.
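
The usefulness of such an enormous open-loop gain comes from surrounding the amplifier with a feedback network, which makes the closed-loop behaviour depend almost entirely on the external components. A minimal sketch of the classic inverting amplifier analysis (the resistor values are illustrative assumptions, not from the original work):

    # Closed-loop gain of an inverting op-amp stage.
    # With finite open-loop gain A the gain is -(Rf/Rin) / (1 + (1 + Rf/Rin)/A);
    # as A grows very large this tends to the ideal value -Rf/Rin.
    def inverting_gain(r_in, r_f, open_loop_gain=1e5):
        ideal = -r_f / r_in
        exact = ideal / (1 + (1 + r_f / r_in) / open_loop_gain)
        return ideal, exact

    ideal, exact = inverting_gain(r_in=1e3, r_f=10e3)   # 1 kohm in, 10 kohm feedback
    print(ideal, round(exact, 4))   # -10.0 vs about -9.9989: feedback dominates

The point of the sketch is that even a gain of "only" 100,000 makes the closed-loop gain almost indistinguishable from the ratio of two resistors, which is why the op-amp became such a dependable building block.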


1945 American engineer Vannevar Bush published "As We May Think" in which he described a theoretical machine which he called a Memex (memory extender) which would enable users to store and retrieve documents linked by associations. Bush described an electromechanical microfilm-based machine in which any two pages from a large library of information could be linked into a trail of related information, with the possibility to scroll backwards and forwards between pages in the trail as if they were on a single reel of microfilm.

This idea was the forerunner of hypertext proposed by Ted Nelson in 1965.


1945 American engineer Edwin Mattison McMillan working at the Berkeley Radiation Lab in California, constructed the first synchrotron, an oscillating field particle accelerator. It was an advance on Lawrence's cyclotron whose performance was limited by the requirement for ever greater magnets to achieve the desired high energy levels.

As in the cyclotron, the synchrotron uses cyclic excitation to accelerate the particles (electrons or ions) in a vacuum chamber and, like the cyclotron, the particles in the synchrotron follow curved paths between the poles of strong magnets, determined by the Lorentz force. Unlike the cyclotron however, which used fixed frequency excitation of the particles and in which the radius of the particle orbits within the "dees" increases with the particle velocity, the particle orbits in the synchrotron are at a fixed radius in a large toroidal chamber. To achieve this, the frequency of the particle acceleration in the synchrotron must increase in step with the particle velocity and, at the same time, the magnetic field created by the electromagnets must also increase in unison with the increase in velocity of the particles to maintain the fixed orbital radius. The particles in the synchrotron are thus deliberately bunched together, circulating in the toroidal vacuum chamber in "synchronisation" with the frequency of the accelerating voltage, whereas in the cyclotron the particles tend to follow a steady stream.
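
The fixed orbital radius ties the bending field directly to the particle momentum through the Lorentz force: B = p/(q·r). A minimal sketch showing how the field of the bending magnets must ramp up as protons are accelerated in a ring of fixed radius; the 100 m radius and the energy values are illustrative assumptions, not the figures of any particular machine:

    # Bending field needed to hold a proton on a fixed radius: B = p / (q * r)
    import math

    Q = 1.602176634e-19    # proton charge (C)
    M = 1.67262192e-27     # proton rest mass (kg)
    C = 2.99792458e8       # speed of light (m/s)
    RADIUS = 100.0         # assumed bending radius of the ring (m)

    def bending_field(kinetic_energy_gev):
        """Relativistic momentum from kinetic energy, then B = p / (q * r)."""
        e_k = kinetic_energy_gev * 1e9 * Q              # kinetic energy (J)
        e_total = e_k + M * C**2                        # total energy (J)
        p = math.sqrt(e_total**2 - (M * C**2)**2) / C   # momentum (kg m/s)
        return p / (Q * RADIUS)

    for e in (0.05, 1.0, 10.0):   # kinetic energies in GeV
        print(f"{e:5.2f} GeV -> B = {bending_field(e):.3f} T")
    # The required field climbs steadily with energy, which is why the
    # electromagnets must be ramped in step with the acceleration.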


Key to the success of this design is the phase stability of the orbiting particle bunch which must be locked in phase with the high frequency excitation to maintain the necessary synchronisation. If a particle has less than the normal energy, its velocity will be lower and its path will be bent into a tighter circle and, because its path will be shorter, the particle will thus complete its revolution around the synchrotron in about the same time as the faster particles. But because it is travelling more slowly it will be subject to a longer exposure to the accelerating field and will thus receive a larger energy increase. Conversely, a particle with above average energy will receive less acceleration. Consequently, the particles may execute small "phase oscillations" about a central stable phase angle.


Another difference from the cyclotron is that the synchrotron needs an ion source in which the particles have already been accelerated to a relatively high energy state. This is because on entry into the synchrotron's toroidal chamber they must already have sufficient energy to circulate at the fixed radius of the beam. Depending on the energy level required to circulate at the radius of the synchrotron, the source could be a Cockcroft-Walton generator followed by a linear accelerator, or even a smaller synchrotron.

With each revolution through the synchrotron the particles pass through one or more Radio Frequency Cavities where they receive a boost in energy from a variable frequency microwave source, progressively raising beam energies from hundreds of MeV (Mega electron volts) to several GeV (Giga electron volts) or more.


Because the synchrotron particles orbit at a fixed radius and since bending, beam focusing and acceleration can be separated into different components and split into multiple modules arranged in the form of a torus, the design is suitable for the construction of very large-scale facilities. For more practical details see CERN's Large Hadron Collider (LHC) commissioned in 2008 which is based on the principles of the synchrotron.


McMillan published his work in 1945, but earlier the same year Russian physicist Vladimir Iosifovich Veksler had already published a paper outlining the same principles. Due to the secrecy surrounding scientific research during and after the Second World War, neither of them was aware of the other's work. Both physicists graciously acknowledged each other's contribution and they were jointly awarded the Atoms for Peace award, sponsored by the Ford Motor Company, in 1963.


1946 Felix Bloch, working at Stanford University, and Edward Purcell, from Harvard University, found that when certain nuclei were placed in a magnetic field they absorbed energy in the radio frequency range of the electromagnetic spectrum, and re-emitted this energy when the nuclei returned to their original state. The strength of the magnetic field and the radio frequency matched each other, that is, the angular frequency of precession of the nuclear spins is proportional to the strength of the magnetic field. This relationship had earlier been demonstrated by the Irish physicist Sir Joseph Larmor and is known as the Larmor relationship. This phenomenon of absorption and re-emission of energy was termed Nuclear Magnetic Resonance (NMR) and the technology later formed the basis of MRI scanners. Bloch and Purcell were awarded the Nobel Prize in physics in 1952 for their discovery.
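
The Larmor relationship makes the resonant frequency a simple linear function of the applied field, f = γB/2π, where γ is the gyromagnetic ratio of the nucleus. A minimal sketch for hydrogen nuclei (protons), whose ratio of about 42.58 MHz per Tesla underlies clinical MRI; the field strengths chosen are merely typical examples:

    # Larmor (precession) frequency of protons: f = gamma * B / (2 * pi)
    import math

    GAMMA_PROTON = 267.522e6   # proton gyromagnetic ratio (rad/s per Tesla)

    def larmor_frequency_mhz(b_tesla):
        return GAMMA_PROTON * b_tesla / (2 * math.pi) / 1e6

    for b in (0.5, 1.5, 3.0):  # representative MRI field strengths in Tesla
        print(f"B = {b} T -> f = {larmor_frequency_mhz(b):.2f} MHz")
    # 1.5 T gives about 63.9 MHz - squarely in the radio frequency range,
    # which is why NMR signals are excited and detected with radio coils.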


1946 Working on Radar projects with the Raytheon company, self taught engineer Percy Le Baron Spencer was testing a magnetron when he discovered that a candy bar in his pocket (so he said) had melted. Realising that the microwaves he was working with had caused it to melt, he experimented with popcorn and eggs and discovered that microwaves would cook foods more quickly than conventional ovens. By late 1946, the Raytheon Company had filed a patent for the idea and set about designing a commercial microwave oven which was eventually launched in 1947 as the Radarange. It sold for $5000 and was the size of a large refrigerator, standing 165 cm (5 1/2 feet) tall and weighing over 340 kg (750 pounds). The magnetron had to be water-cooled, so plumbing installations were also required. It was another 20 years before the size and costs were reduced enough for the product to achieve commercial success.


See more about Microwave Ovens


1946 Ceramic magnets developed by Philips during World War II were introduced. Consisting of a mixture of oxides of iron and other metals, these magnetic ferrites revolutionised the design of inductors, transformers, motors, loudspeakers and high power vacuum tubes.


1947 The transistor invented by Americans John Bardeen, Walter Houser Brattain (born in China) and William Bradford Shockley, working at Bell Labs in the USA. This invention marked the birth of the massive semiconductor industry. Because of their enormous investment in telephone plant, the Bell System was one of the last companies to use its own invention.

The trio were awarded the Nobel Prize for physics in 1956 for their work.


The reality behind the workings of this famous group was a little different from that portrayed by the Bell Labs PR machine.

The pioneering work on the transistor was in fact done by close friends Bardeen, the theorist, and Brattain, the experimenter, two talented though modest researchers. Shockley was their brilliant but arrogant supervisor, a direct descendant of pilgrims who came to America on the Mayflower. He had paid little attention to their work until they made their breakthrough, whereupon, with the encouragement of Bell Labs management, he suddenly became deeply involved, especially when the credit was to be shared out. Bardeen and Brattain were actually awarded the patent for the point contact transistor (diagram) which they first demonstrated in December 1947. Built by Brattain, it was made from a ribbon of gold foil around a plastic former in the shape of a triangle. At one of the points he sliced through the ribbon with a razor blade to make two connections, the emitter and the collector, spaced very closely together. This point of the triangle was placed gently down on to a base block of Germanium to which was made the third connection. By varying the voltage on the base, the current between the two other connections could be controlled. Turning this lab model, which had all the frailties and variability of the cat's whisker, into a practical device was difficult however, and point contact transistors were never widely used. One of the main obstacles to overcome was the need for very thin base layers (of the order of 1 micron or less) to obtain low capacitance and a high frequency range, and at the same time to make reliable connections to this layer. Transistors went through many design iterations and 10 years of development before settling on the planar structure.


Building on their work, Shockley first proposed a design for a field effect transistor but it fell foul of Lilienfeld's 1930 patent. Then, working feverishly through January 1948, by the end of the month Shockley had devised the junction transistor, a single semiconductor sandwich with three layers, which was easier to manufacture, more stable and could handle more power than the fragile point contact transistor, and he was also awarded a patent for this new device. It was another two years however before the development of Teal's manufacturing techniques for growing single crystals made it possible to turn Shockley's vision into a reality.

Relations between the abrasive Shockley and his two researchers subsequently deteriorated beyond breaking point and Bardeen and Brattain both left Bell Labs. Bardeen went on to gain a second Nobel prize in 1972 for his work on superconductivity.


One key contributor, sadly overlooked in the credit shareout for this breakthrough technology, was fellow Bell researcher Russell Ohl who developed the technology of the P-N junction which made the transistor possible.


Shockley himself left Bell Labs in 1955 to form his own semiconductor company, the first in Silicon Valley. Because of his technical reputation he initially attracted the most talented young men in the industry, but within a year his egotistical management style drove them out. A group of these short term alumni, known as the Traitorous Eight, left en masse in 1957 to found the Fairchild Semiconductor company. Between them they created the foundations of the world's semiconductor industry. They included Swiss physicist Jean Hoerni, inventor of the planar process which made integrated circuits possible, and Gordon Moore and Robert Noyce who went on together to found Intel in 1968. In 1962, along with Jay Last and Sheldon Roberts, two other members of the Traitorous Eight, Hoerni founded Amelco, now known as Teledyne. Then in 1964 Hoerni founded Union Carbide Electronics and in 1967 he founded Intersil. Shockley shared none of the wealth of his alumni and his company eventually folded. At the same time he became an outspoken proponent of eugenics and the notion that intelligence was genetically determined by race, which tarnished his reputation and led to his eventual disgrace. Alexander Graham Bell had also been an advocate of eugenics but was more circumspect in his views and did not attract the opprobrium provoked by Shockley.

Fairchild was the breeding ground of many more semiconductor pioneers, known as the Fairchildren, who went on to found their own companies. Amongst these were Fairchild's top salesman the flamboyant Jerry Sanders who left to found AMD and Hungarian born Andrew S. Grove, Assistant Research and Development Director, who left to found Intel with Moore and Noyce.


1947 William Rae Young of Bell Laboratories, later joined by Douglas Harned Ring, introduced the concept of cellular communications based on the notion of hexagonal cells, each with a low power transmitter, which made mobile telephones possible. However the computer and switching technologies needed to make it work did not yet exist.

Prior to that, the use of portable phones was limited to the area covered by the range of the central base station on which they were registered.

It was another thirty years before the ideas of Young and Ring were implemented. (See also 1970 Cellular handoff, 1971 Practical systems and 1973 Cell phone handset)


1947 German battery manufacturer Georg Neumann developed a successful seal for the Nickel-Cadmium battery, making possible a practical recombinant system in which the gases generated by the chemical reactions are recombined, rather than vented to the atmosphere, to prevent loss of electrolyte. This recombinant system, together with the benefits of low weight and volume, led to the widespread adoption of NiCads for portable applications, bringing about a gradual renaissance in the use of DC power for domestic products and creating the demand for cordless appliances (later expanded and satisfied by other cell technologies).


1947 The first commercial application of a piezoelectric ceramic: Barium titanate used in a gramophone pickup.


1947 The hologram patented in the UK by Hungarian born refugee from Germany, Dennis Gabor. Reflection and transmission types are possible. They use a coherent light source (now provided by a laser beam) which passes through a semi reflecting plate acting as a beam splitter to create a reference beam and an object beam. Light from the object beam is reflected off the object and is projected onto a photographic plate. Light from the reference beam reflects off a mirror and also illuminates the photographic plate. The two beams meeting at the photographic plate create an interference pattern representing the amplitude and phase of the resultant wave, which is recorded on the plate. The 3D holographic image is reconstructed by reversing the procedure. Holograms are easy to read but difficult to copy. They can be printed on labels and are used in the battery industry to provide a method of secure product identification.

Gabor received over 100 patents and was awarded the Nobel Prize for Physics in 1971.


1947 American manufacturing engineer John T. Parsons linked an IBM accounting computer with its punched card system to a milling machine and created the first numerically controlled machine for cutting two dimensional curves. He expanded on the idea in 1948, producing a three axis controller capable of cutting three dimensional components.

Amazingly it took over 140 years before someone applied the automation techniques used by Jacquard and others in the weaving industry to metal machining.


1947 American engineer Ralph Miller patented modifications to improve the efficiency of the four stroke internal combustion engine heat cycle by overlapping the valve timing. This allowed asymmetrical induction and exhaust cycles with a smaller fuel-air charge followed by a relatively larger expansion (power) stroke. This achieves the same goals as the Atkinson cycle but with a less complex, lower stressed, mechanical mechanism. The penalty compared with the Otto engine was a larger stroke, and hence a larger, heavier engine, to maintain the same compression ratio and power output. At the time, the industry goals were smaller, higher power engines rather than fuel efficiency and Miller's innovations were ignored. Times have changed. Now fuel efficiency is more important than power to weight ratio and several automakers are working on engines using the Miller cycle.

See also Heat engines.


1947 The Bell X-1 Rocket Plane was the first aeroplane to break through the "sound barrier". Piloted by Charles E. (Chuck) Yeager it was dropped free from the bomb bay of a four-engined World War II vintage B-29 bomber flying at 250 m.p.h. over the Mojave Desert in California. Powered by a rocket engine with 6000 lbs of thrust, it reached a supersonic speed of Mach 1.06 (786 m.p.h) at an altitude of 43,000 feet.


The idea for a test aeroplane to travel faster than the speed of sound was conceived in 1944 by John Stack of the US National Advisory Committee for Aeronautics (NACA) together with Ezra Kotcher of the US Army Air Forces and Walter Diehl of the US Navy, in response to stability problems experienced in high speed flight. Subsonic and supersonic airflow over the aircraft's wings created a range of undesirable characteristics including shock waves, increased drag, severe turbulence, and loss of control effectiveness. Wind tunnel simulations were unreliable in analysing these problems since they were affected by the same aerodynamic effects, and the trio realised that a specialised research aircraft offered the only feasible means of getting more accurate supersonic aeronautical data. They persuaded Bell Aerospace chief engineer Robert J. Woods to accept the challenge of building the world's first supersonic aeroplane.


Woods based the shape of the X-1 fuselage on the shape of a 0.50 calibre machine gun bullet, which was known to be stable at supersonic speeds. It was powered by a rocket engine with four chambers burning ethyl alcohol diluted with water, with a liquid oxygen (LOX) oxidiser and a steam driven fuel pump, following propulsion technology pioneered in the German V-2 missile. (Early prototypes used pressurised nitrogen for pumping the fuel.)


Chuck Yeager, the test pilot who put his life on the line to test the plane, is forever associated with the Bell X-1 while Robert Woods, the engineer who designed it, is mostly forgotten.


1947 Following experiments at the University of California, Berkeley in the 1930s, American chemist Willard Libby, working at the University of Chicago, published a paper outlining the principles of radiocarbon dating.


Carbon occurs naturally in two stable isotopes, ¹²C and ¹³C, and also a radioactive isotope, ¹⁴C. Though it decays with a half life of about 5,730 years, the ¹⁴C isotope is constantly being replenished by cosmic rays which react with stable Nitrogen ¹⁴N atoms in the stratosphere and troposphere, transforming one of the Nitrogen's protons into a neutron to create the unstable ¹⁴C radioactive Carbon isotope. This radiocarbon isotope quickly combines with the Oxygen in the atmosphere to form Carbon dioxide (CO2) which diffuses in the atmosphere or is dissolved in the ocean, and is taken up via photosynthesis by plants which are eaten by animals, so that the radiocarbon becomes distributed throughout the biosphere.

During their lifetimes, plants and animals are constantly exchanging Carbon with their surroundings, so that the Carbon they contain will have the same proportion of radiocarbon ¹⁴C as the atmosphere. Once the organism dies however, it ceases to acquire ¹⁴C from the biosphere, but the radiocarbon existing within its biological material at that time will continue to decay, so that the ratio of the ¹⁴C radiocarbon to the stable ¹²C Carbon in its remains will gradually decrease. Because the ¹⁴C radiocarbon decays at a known rate, the proportion of radiocarbon remaining can be used to determine how long it has been since a given sample stopped absorbing or ingesting Carbon - the older the sample, the lower the percentage of ¹⁴C which will be left in the Carbon content of the sample.
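
Because the decay is exponential, the age follows directly from the measured fraction of ¹⁴C remaining: t = (T½ / ln 2) · ln(1/fraction). A minimal sketch of the arithmetic, using the half life quoted above:

    # Radiocarbon age from the fraction of C-14 remaining in a sample.
    # N(t) = N0 * (1/2)^(t / T_HALF)  =>  t = (T_HALF / ln 2) * ln(N0 / N)
    import math

    T_HALF = 5730.0   # half life of C-14 in years

    def radiocarbon_age(fraction_remaining):
        return (T_HALF / math.log(2)) * math.log(1.0 / fraction_remaining)

    for f in (0.5, 0.25, 0.10):
        print(f"{f:4.0%} of C-14 left -> about {radiocarbon_age(f):,.0f} years old")
    # 50% -> 5,730 years; 25% -> 11,460 years; 10% -> about 19,000 years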

Libby's theory was verified by comparing the estimated age of samples from ancient artefacts of known provenance and origin, such as tree rings, with their recorded history.


See more about radiocarbon beta decay.


1948 After a period of secrecy the transistor was announced to the press on July 1. The start of a revolution in electronics was reported by the "New York Times" in 4½ column inches at the end of its radio chat section on page 46.


1948 German physicists Herbert F. Mataré and Heinrich Welker working at Westinghouse in Paris applied for a patent on an amplifier which they called the transistron, the so called French transistor. It was a point contact device based on the semiconductor minority carrier injection effect which they had discovered independently of Bell Labs. Mataré had first observed transconductance effects while working on germanium duodiodes for German radar equipment during World War II. Westinghouse however concluded that there was no market for the transistron and closed their Paris lab to concentrate their resources on nuclear power engineering. Mataré returned to Germany in 1952 to found the transistor company Intermetall and Welker went to work for Siemens.


1948 Americans Gordon K. Teal and John B. Little from Bell Labs used the Czochralski (CZ) method to grow single crystals of Germanium, which became a fundamental process in the manufacturing of semiconductors. The growing of large quantities of monocrystalline semiconductors by pulling the crystal from the melt was an absolute necessity for the production of high volume, low cost transistors.


Despite lack of support (or even opposition) from Bell management, Teal persevered with the development of crystal growth technology and, working with fellow physical chemist Morgan Sparks, he adapted the Czochralski method to allow doping of the crystals as they were pulled from the melt. Known as the grown junction technique, P and N type impurities were successively added in turn to the molten Germanium to build up the three layer NPN or PNP sandwich. The slice, or wafer, containing the layers was then cut from the crystal and then cut up again into smaller sections of the desired transistor size. Wires were then attached to each layer and the device was encapsulated. Using this technique, in 1950 Teal successfully fabricated the first working junction transistor from a Germanium crystal, two years after it had been proposed by Shockley. The frequency response of early junction transistors was unfortunately inferior to that of point-contact devices because it was difficult to grow a thin enough base region and then to attach leads to it once it was grown. For this and for commercial reasons, Bell Labs held off announcing this achievement until 1951, one month after GE's announcement of the alloy junction transistor. Nevertheless the grown junction transistor became the first semiconductor device with enough predictability and dependability to be used in high volume consumer goods.


Also in early 1951, Teal, working with technician Ernest Buehler, grew the first single crystals of Silicon and doped them with impurities to make solid-state diodes, once more publishing the results a year later.


In 1952 Teal moved to Texas Instruments (TI) where he pioneered the use of Silicon rather than Germanium technology for semiconductor manufacturing.


1948 Shannon publishes "A Mathematical Theory of Communication", outlining what we now know as Information Theory, describing the measurement of information content of a message and the use of binary digits to represent yes-no alternatives - the fundamental basis of today's telecommunications.

Using Boltzmann's concept of entropy (a measure of uncertainty), Shannon demonstrated that decreases in uncertainty (or entropy) of the transmitted message correspond to the actual information content in the received message.

Shannon's equation for information entropy shows the same logarithmic relation as Boltzmann's equation for thermodynamic entropy. He used this measure of information to show how many extra bits would be needed to efficiently correct for errors when a message was transmitted on a noisy channel. He defined the entropy rate of a data source as the average number of bits per symbol needed to encode it.
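
For a source with symbol probabilities p_i, Shannon's entropy is H = -Σ p_i.log2(p_i) bits per symbol. A minimal sketch showing that a biased source carries less information per symbol than a fair one, and so can be encoded in fewer bits on average:

    # Shannon entropy of a discrete source: H = -sum(p * log2(p))
    import math

    def entropy_bits(probabilities):
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy_bits([0.5, 0.5]))   # fair coin: 1.0 bit per symbol
    print(entropy_bits([0.9, 0.1]))   # biased coin: about 0.469 bits per symbol
    # The biased source is more predictable, so each symbol resolves less
    # uncertainty and an efficient code needs fewer bits on average.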


Shannon also expanded on Hartley's information definitions to derive the Shannon-Hartley Law describing the maximum information carrying capacity of a communications channel (Assuming no error correction).

C ≤ B.log2(1+S/N)

or

C ≤ B.log2(1 + S/(B.N0))

Where

C = The maximum Channel capacity in Bits/Second

B = The Bandwidth of the communications channel in Hertz

S = The Signal power in Watts

N = The total interfering Noise power in the channel in Watts

S/N = The Signal to noise ratio

N0 = The Noise density in the communications channel in Watts per Hertz

It shows that for a given channel capacity there is a possible tradeoff between bandwidth and signal to noise ratio. By increasing the channel bandwidth, the signal to noise ratio can be reduced. This is key to spread spectrum technology.


Note that the channel capacity is influenced by the bandwidth in two ways. The first part of Shannon's equation shows a linear relationship, with the channel capacity increasing in line with bandwidth. The second part of the relationship shows the channel signal to noise ratio decreasing as the bandwidth increases, thus reducing channel capacity. However, since this is a logarithmic relationship with respect to channel capacity, increasing the available channel bandwidth increases channel capacity faster than the corresponding increased background noise reduces it.
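
A minimal numerical sketch of this tradeoff, holding the signal power S and noise density N0 fixed (at made-up illustrative values) while the bandwidth doubles:

    # Shannon-Hartley capacity: C = B * log2(1 + S / (B * N0))
    import math

    def capacity_bps(bandwidth_hz, signal_w, noise_density_w_per_hz):
        snr = signal_w / (bandwidth_hz * noise_density_w_per_hz)
        return bandwidth_hz * math.log2(1 + snr)

    S, N0 = 1e-6, 1e-12           # illustrative signal power and noise density
    for b in (1e6, 2e6, 4e6):     # doubling the bandwidth each time
        print(f"B = {b/1e6:.0f} MHz -> C = {capacity_bps(b, S, N0)/1e6:.2f} Mbit/s")
    # Capacity still grows with bandwidth (1.00, 1.17, 1.29 Mbit/s)
    # even though the signal to noise ratio falls at each step.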


Although the relationship was derived for digital signals, similar principles (but more complicated mathematics) apply to analogue signals explaining why frequency modulation provides better signal to noise performance than amplitude modulation even though, for the same signal, FM occupies more bandwidth than AM and thus picks up more thermal/background noise.


Shannon is also credited with the introduction of sampling theory in 1949. Based on earlier work by Nyquist, he went on to provide the mathematical proof showing that a continuous-time signal can be represented by a (uniform) discrete set of samples - the foundation of signal digitisation.
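
The theorem sets a hard lower bound on the sampling rate: to represent a signal band-limited to f_max without aliasing, the sample rate must exceed 2·f_max, the Nyquist rate. A minimal sketch of the arithmetic, using audio as a familiar example:

    # Nyquist rate: a band-limited signal must be sampled at more than twice
    # its highest frequency component to be perfectly reconstructable.
    def nyquist_rate_hz(f_max_hz):
        return 2 * f_max_hz

    f_max = 20_000                  # upper limit of human hearing (Hz)
    print(nyquist_rate_hz(f_max))   # 40,000 samples per second minimum
    # CD audio samples at 44,100 Hz, comfortably above the Nyquist rate,
    # leaving headroom for a practical anti-aliasing filter.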


Shannon was surprisingly reclusive, working alone at Bell Labs keeping his door shut. He had two hobbies which he often combined: juggling and designing whimsical toys. His inventions included a two-seater unicycle, a unicycle with an off-centre hub, juggling machines, rocket-powered Frisbees, a motorised Pogo stick, mechanical maze-solving mice and a device that could solve the Rubik's Cube puzzle. He would emerge from his office at night to ride his unicycle down the halls while at the same time juggling three balls. In later life he was afflicted by Alzheimer's disease, and he spent his last few years in a nursing home.


1948 Patent granted to UK academic Eric Laithwaite, working at Imperial College London, for a high power linear motor. In 1965 Laithwaite outlined the principles of magnetic levitation and propulsion used in maglev trains in an IEE paper entitled "Electromagnetic Levitation". Unfortunately his ideas found little support from either the British government or British industry. He persevered however, and with the support of British Rail, Birmingham City Council and local industries, the World's first commercial Maglev system was launched between Birmingham Airport and the Birmingham City rail system. It was opened in 1984 and ran for 10 years. The track was only 600 metres (660 yards) long and the train covered it at a top speed of about 26 miles per hour.


The concept of a linear motor used to propel a train was originally proposed in 1902 and patented in 1905 by German inventor Alfred Zehden. In the intervening years several proposals had been put forward but until 1984 none of them had come to fruition. These included a system designed by James R. Powell and Gordon T. Danby for a magnetic levitation train propelled by linear motors based on Laithwaite's ideas for which they were granted a US patent in 1969, and a demonstration system with a 908 metre track constructed for the first International Transportation Exhibition at Hamburg in 1979 which ran for three months after the exhibition finished.

Since then several practical systems have been installed around the world including the Shanghai Maglev Train which entered service in 2004. With a top speed of 430 km/h (270 mph) it covers the distance of 30.5 kilometres (19.0 miles) between Shanghai Pudong International Airport and the city in 8 minutes, including 3 stops.


1948 Swiss engineer Robert Durrer developed a refined version of the Bessemer converter called Basic Oxygen Steelmaking (BOS) which replaced the blowing of air with the blowing of pure Oxygen. It was ten times faster than the open hearth process and produced high quality steel with lower capital and labour costs.


See also Iron and Steel Making


1949 The patent for the barcode system, using a few thick and thin stripes to uniquely identify millions of individual items, was filed in the USA by recently graduated students Norman J. Woodland and Bernard Silver. Inspired by the Morse code, it was intended as a system for automatically reading product information during checkout in retail outlets. The patent, which included both linear and bulls eye printing patterns as well as the means needed to read the code, was eventually granted in 1952. In the meantime, Woodland had taken up employment with IBM in 1951, hoping to interest them in commercialising the system. While they found the idea interesting, at that time IBM did not have low cost data processing systems suitable for high volume transactions in retail stores, nor was there a manifest demand for the system, so they did not take up the offer. They did however offer to buy the patent, but this was turned down.


Adoption of the barcode system was disappointingly slow. Reliable printing and scanning equipment had to be developed and coding standards had to be proposed and accepted by all the potential users. But even after these problems had been solved, success depended on a critical mass of retailers installing expensive scanners while manufacturers simultaneously had to agree to attach barcode labels to their products and neither wanted to move first.


An essential requirement of the barcode system was the method of providing unique identification of millions of different products ranging from food, clothing and pharmaceuticals to hardware and electrical goods each with different attributes such as sizes, weights, quantities and colours. Furthermore, for practical implementation, the encoding system needed to be standardised and universally accepted, not just by the retailers but also by all of the manufacturers and suppliers of the products. A system based on Morse code alone could not reasonably provide such capacity and the Universal Product Code (UPC) was developed by IBM for this purpose. In 1973 the IBM UPC was selected by the National Association of Food Chains (NAFC) as their standard and since then numerous variants have been developed for special applications.


The most common code is the UPC-A barcode which consists of 12 numerical digits (0 to 9), providing one trillion (10^12) unique numbers. Each digit is represented by a unique optical pattern of two bars and two spaces, each of which may be from one to four units wide, the unit of width being called a module. The total width of the bars and spaces making up a digit is always seven modules, and the seven modules spread across two bars and two spaces provide the 10 possible combinations or digits. Thus to represent the 12 digits the UPC-A code requires a total of 7 × 12 = 84 modules.

The barcode's 12 numerical digits are sufficient to identify the product manufacturer and the type of product at the point of sale, but they are not enough to record globally unique serial numbers or additional information such as price, "sell by date", specification and product contents or ingredients. If however the barcode's digits are used as an index to a connected database, there is effectively no limit to the amount of data which can be stored.


In addition to the patterns representing the numerical digits, the code contains guard patterns of bars and spaces, or non-numerical identifiers, at the start, middle and end of the symbol, which frame the two halves of the code and act as timing references for the scanner. Within the 12 digits themselves, the first digit indicates the particular number system used by the following digits, while the last digit is an error check digit designed to detect errors in scanning or manual data entry. The ordering of the bars and spaces is used to detect the direction of the scan.

The code string is therefore made up from 30 bars and 30 spaces. There is also an empty "quiet zone" at each end of the code.
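
The check digit mentioned above is computed from the other eleven digits: the digits in the odd positions (counting from the left) are summed and weighted by three, the even positions are added unweighted, and the check digit is whatever brings the grand total up to a multiple of ten. A minimal sketch of the calculation; the 11-digit payload below is just an illustrative example:

    # UPC-A check digit: 3 * (sum of digits in odd positions, counting from 1)
    # + (sum of digits in even positions); the check digit makes the total
    # divisible by 10.
    def upc_check_digit(first_11_digits):
        digits = [int(d) for d in first_11_digits]
        total = 3 * sum(digits[0::2]) + sum(digits[1::2])
        return (10 - total % 10) % 10

    payload = "03600029145"                           # illustrative 11-digit payload
    print(payload + str(upc_check_digit(payload)))    # -> 036000291452

A scanner or till performs the same sum over all 12 digits and rejects the read if the total is not a multiple of ten, catching most single-digit misreads and keying errors.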


It was 1974 before the barcode system was first used in retail store checkouts. Early trials showed that sales increased by up to 12% with the introduction of barcode scanning, and inventory management was also improved. After the results became known the system rapidly gained acceptance.


Barcodes are now used for many more diverse applications. In the manufacturing industry they are used to record data such as product identification, date of manufacture and serial number to simplify inventory management and to facilitate traceability of suspect products. In the healthcare and security industries they are used for personal identification and in transport services they are used for property identification.


See also RFID Tags


1949 American engineer Jay Wright Forrester, working at MIT, invented the magnetic core random access memory (RAM), replacing vacuum tubes (valves), whose state could be "On" or "Off", with tiny annular magnetic cores whose state could be "magnetised" or "not magnetised". His invention was actually preceded by a design by An Wang, working at Harvard, whose patent was not awarded until 1955.


1949 Dip soldering invented by Stanislaus (Stan) Francis Danko and Moe Abramson of US Army Signal Corps, six years after the invention of the printed circuit board (PCB).


1950 Shockley published his classic text on transistors, Electrons and Holes in Semiconductors. Even into the 1950s the theory and structure of semiconductors was not well understood and it took experiments using cyclotron resonance to reveal the nature of electrons and holes in Silicon and Germanium.


1950 Shockley first described the idea for a four layer PNPN transistor which he referred to as a bipolar transistor with a P-N hook-collector. The mechanism for the operation of the device was analysed further in 1952 by Jewell James Ebers another Bell Labs physicist and in 1956 Bell Labs engineers John Louis Moll, Morris Tanenbaum, J. M. Goldey, and Nick Holonyak, Jr., published a paper outlining its use as a transistor switch.


The device was however first turned into a practical product, the silicon controlled rectifier (SCR) or thyristor (from the Greek "thyr" - door or gate) in 1956 by Gordon Hall at G.E.


Shockley on the other hand left Bell Labs in 1955 to pursue its application as a four layer diode, which he named after himself, in his own company, Shockley Semiconductor Laboratory.


1950 Jun-ichi Nishizawa, working at Tohoku University, invented the PIN diode.


1950 The Swiss firm Oerlikon developed the so-called gyrobus, a flywheel powered electric bus first used in Yverdon, Switzerland in 1953. The flywheel battery incorporated an electric motor which was used to re-charge the flywheel at bus stops (Opportunity charging).


1950 As a result of work carried out in the 1940s, John Dreyer, working at the Marconi Laboratories in England, patented a method for orienting dye molecules in liquid crystals to make polarisers. See also Reinitzer (1888), Heilmeier (1968), Fergason (1969) and Gray (1970).


1950 American physicist and statistician W. Edwards Deming was invited to Japan by Japanese business leaders to teach American methods of statistical analysis, quality control and process improvement. He stayed for many years and, with the Japanese and others, brought a completely different focus to the conception of quality - it was concerned with methods for improvement and striving to do better, not control and conformance. Building on Japanese working practices of constant improvement, teamwork, responsibility and the setting of high standards, he developed the concept of Total Quality Management (TQM). The results were so spectacular that Deming was credited with being a major influence on the success of Japan's post war economic recovery, and the Japanese were invited back to the West to explain their methods to western business leaders.


1950 The modern gas-liquid chromatography technique was invented by British chemist Archer John Porter Martin. Based on principles first proposed by Tswett, a gaseous or liquid sample to be analysed is injected into a long tube together with a carrier gas or liquid which sweeps the sample through the tube. The motion of the sample molecules is inhibited by adsorption, either onto the tube walls or onto packing materials in the tube. The rate at which the molecules progress along the tube depends on how strongly they are adsorbed, and this in turn depends on both the type of molecule in the sample and on the adsorbent materials. Since each type of molecule has a different rate of progression, the various components of the sample are separated as they progress along the tube and reach the end of the tube at different times. A detector monitors the outlet stream from the tube. The time for each component to reach the outlet is unique for the particular component in the sample, allowing it to be identified. The amount of that component can also be determined. Generally, substances are identified by the order in which they emerge from the column and by the time required for the sample to pass through the tube. The gas-liquid chromatograph is a basic laboratory tool for analysing the chemical materials used in energy cell manufacturing.

Martin who had invented a series of chemical analysis machines over the years was awarded the Nobel Prize for chemistry in 1952.


1951 The first computer programming textbook published: "The Preparation of Programs for an Electronic Digital Computer" by Maurice V. Wilkes, David J. Wheeler, and Stanley Gill, the pioneering software team who developed assembly language programming for Cambridge University's EDSAC (Electronic Delay Storage Automatic Calculator) computer, which they completed in 1949. The book outlined assembly language programming: writing machine instructions in mnemonic form using symbolic instruction code and an assembler to convert the mnemonics into binary machine code instructions which the computer could understand. It also introduced the notions of reusable code with subroutines and libraries. Assembly language was an essential step on the road to the high level programming languages we use today.


1951 Grace Murray Hopper of Remington Rand, invents the modern concept of the compiler. A compiler is a computer program which translates source code written in a high level language to object code or machine language that may be directly executed by a computer or microprocessor. It allows programs written in high level languages to be run on different machines.

As an example, the software used to run an embedded system, such as that used in a battery management system (BMS), will normally be developed off line on a PC or other general purpose computer using a high level language such as "C". This source code will then be compiled into machine code which will run on a dedicated microprocessor in the BMS. This object code is downloaded into, and stored in, the BMS memory and runs when the system is switched on or when it is called to do so by external inputs.


Grace Hopper worked with Aiken on the Harvard Mark 1 computer and with Eckert and Mauchly on the ENIAC, and in the US Navy she rose to the rank of Rear Admiral. In 1945, when she found a moth between the contacts of a relay in an early computer, causing a malfunction, she coined the word "bug" for a computer fault, and the word "debugging" for its removal.


1951 American engineer John Saby, working for GE, made the first alloy junction transistor (diagram), one month before Bell's announcement of their grown junction transistor. In the alloy junction process two small pellets of P type impurities were placed on opposite sides of a thin disc of N type Germanium and heated till the pellets melted into the Germanium, fusing into alloyed regions within the Germanium base. The heating stopped short of melting right through the Germanium, leaving a narrow N type base layer between the P type emitter and collector. Wires were then connected to the three regions.


At Philco, young engineer Clare Thornton found a way to improve on the alloy transistor in 1953 by using a jet etching technique to create thinner, more controllable base sections. The Germanium base was electro-chemically etched on opposite sides by a jet of chemical etchant with an electrical bias until the Germanium reached the desired thickness (translucent to visible light), at which point the etching was stopped. The emitter and collector pellets were placed in the depressions created and alloyed as normal (diagram). This enabled them to produce transistors which could operate with reasonable gains beyond 30 MHz. Devices made this way were called surface barrier transistors.


1951 Automated Assembly introduced by the Ford Motor Company for producing engines. Actually more than just assembly, it was a fully automated multi-stage production line with engine blocks positioned, machined and transferred to the next stage in a sequence of custom automatic machining operations. Modern low cost computers and pneumatic systems allow these production automation techniques to be used today for relatively low volume production.


1951 Philip Edwin Ohmart of Cincinnati, Ohio, invented the first nuclear battery, which converted radioactive energy directly to electrical energy. It consisted of two electrochemically dissimilar electrodes separated by a filling gas which was ionised by exposure to nuclear radiation to produce the electrical current. Ohmart obtained an emf efficiency of 0.01% on a cell using Magnesium dioxide and Lead dioxide with Argon as the gas and Silver-110 (Ag110) as the radioactive source. The idea was later used by Sampson and Brown in their respective Gamma and Beta batteries.


1951 On December 20th the world's first nuclear powered electricity generating station, the Experimental Breeder Reactor EBR 1, a pilot plant at Arco, Idaho, came on stream, powering four 200 Watt lightbulbs. The following day the power output was ramped up to 100 kW, enough to power the whole plant.

Nuclear power plants use a variety of fuels, moderators, coolants and reactor designs all of which are very complex but the reactors themselves do not generate electricity directly. They are simply used as nuclear boilers to heat water, raising steam to drive conventional turbine generators, a crude but controllable (safe) way of harnessing nuclear energy.

In 1955 Arco, population 1,000, became the first community to be powered by a nuclear reactor. In subsequent years the use of nuclear power spread as other countries followed suit.

  • 1954 Obninsk, USSR, 5MW capacity
  • 1956 Calder Hall, UK, 50MW capacity
  • 1956 Marcoule, France, 5MW capacity
  • 1957 Shippingport, Pennsylvania, USA, 90MW capacity
  • 1962 Rolphton, Canada, 20MW capacity

As of 2005 there were 439 nuclear power plants generating 16% of the world's electricity, with 25 more plants under construction and over 100 more planned or proposed.


See more at Nuclear Energy - The Theory and Nuclear Energy - The Practice


1951 Russian nuclear physicists Igor Yevgenyevich Tamm and Andrei Dmitriyevich Sakharov, working at what is now the Kurchatov Institute, proposed a method of generating nuclear power by means of controlled thermonuclear fusion, confining an extremely hot ionised plasma in a toroidal-shaped (doughnut) magnetic bottle, known as a Tokamak device. The name is an acronym of the Russian words for "toroidal chamber with magnetic coils". The magnetic forces acting on the moving charges of the plasma keep the hot plasma from touching the walls of the chamber, and the current that generates the field is induced in the plasma itself, serving also to heat the plasma. The fusion fuel consists of isotopes of hydrogen, which must be heated to extreme temperatures of some 100 million degrees Celsius, must be kept dense enough, and must be confined for long enough (at least one second) to trigger the energy release.

The highest plasma temperature produced in a laboratory to date was recorded in 1994, when 510 million degrees Celsius (918 million degrees Fahrenheit) was reached in the Tokamak Fusion Test Reactor operated at the Princeton Plasma Physics Laboratory in the USA. Despite these very high temperatures it has still not been possible to create controlled self-sustaining thermonuclear fusion.


And some people thought they could achieve cold nuclear fusion in a beaker of heavy water.


Sakharov is possibly better known for his outspoken campaigning against nuclear proliferation and for human rights, for which he was banished and kept under police surveillance.


1952 A patent was issued to the Lip Watch Company of France for the first electric watch. The design was born out of a cooperative deal agreed in 1949 between Fred Lip and the Elgin Company of the USA.


The first electric watches still used a conventional gear train with a balance wheel oscillator for timekeeping, but instead of power being derived from a mainspring, the impulses driving the balance wheel were generated electromagnetically with the energy provided by the battery. The design turned orthodox watch design wisdom on its head. Instead of the sensitive timekeeping oscillator being isolated from the mainspring by the escapement, which provided controlled low power pulses to the balance wheel, pulses from a solenoid acting on a magnet drove the balance wheel directly, and this in turn drove an index / escape wheel connected through the gear train to turn the hands - a reversal of the conventional power flow, with much greater forces on the balance wheel.

In this way the battery provided constant timing impulses and the mainspring, with its typical variability, was eliminated and the watch never needed winding.


See more about The Lip Oscillator and How it Works.


Performance

Because the timekeeping was still regulated by a balance wheel, the electric watch was no more accurate than a good mechanical watch - typically a gain or loss of around 10 seconds per day.

Low battery life was also a problem. The original design used bean-shaped batteries, designed and made in house, which proved to be unreliable; success was only made possible by replacing them with Ruben's recently developed button cells made by Mallory. (Later models used a single battery.)

The design of the switching contacts also proved to be problematic, and it was not until 1958 that all the bugs had been ironed out and the first product, the model R 27, was launched. This model incorporated a diode to reduce contact arcing, and on this basis it was called an electronic watch rather than an ordinary electric or electro-mechanical watch.


Lip were, however, beaten to the punch by the Hamilton Watch Company, who launched a similar system in 1957, almost two years before Lip's 1958 commercial launch.

Note the Hamilton design used fixed magnets and a moving coil to provide the oscillator impulses, whereas the Lip system used fixed coils and moving magnets. Both systems suffered from short battery life and reliability problems with the fine wire switching contacts.


See also the Bulova Electronic Watch.


1952 10,000 transistors were manufactured worldwide, mostly for government and research.


1952 American engineer William G Pfann working at Bell Labs invented the zone refining process for purifying Germanium to a level of one part in 10^10 (by comparison, a single grain of salt in a bag of flour would be a greater impurity). It depends on the fact that the melt of a crystalline material will sustain a higher level of impurities than the crystal itself. The Pfann process involved localised melting, by induction or other heating, of a Germanium ingot supported in a graphite boat inside a tube. By moving the heater along the tube, the molten zone passes down the ingot, melting the impure solid at its forward edge and leaving a wake of purer solidified material behind it. In this way the impurities concentrate in the melt and with each pass are moved to one end of the ingot. After multiple passes the impure end of the ingot is cut off.
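The effect of repeated passes is easy to model numerically. A minimal one-dimensional sketch (illustrative Python; the segregation coefficient k = 0.3 is an invented value, not a measured figure for Germanium):

    # Model of one zone refining pass: the molten zone sweeps along the
    # ingot and the solid freezing behind it retains only a fraction k of
    # the impurity concentration in the melt, so impurities accumulate in
    # the melt and are carried to the far end.
    def zone_pass(conc, zone_len=10, k=0.3):
        n = len(conc)
        melt = sum(conc[:zone_len])          # impurity dissolved in the zone
        refined = [0.0] * n
        for i in range(n - zone_len):
            frozen = k * melt / zone_len     # level frozen out behind the zone
            refined[i] = frozen
            melt += conc[i + zone_len] - frozen   # zone advances one cell
        for i in range(n - zone_len, n):     # the zone finally freezes at the tail
            refined[i] = melt / zone_len
        return refined

    ingot = [1.0] * 100                      # uniform starting impurity level
    for _ in range(5):                       # five passes of the heater
        ingot = zone_pass(ingot)
    print(f"front: {ingot[0]:.4f}  tail: {ingot[-1]:.1f}")   # purified vs enriched

Each pass lowers the impurity level at the front of the ingot and raises it at the tail, which is why the impure end is cut off after multiple passes.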


Pfann was apparently unaware that a single pass purification technique had been proposed in a paper published in 1928 by Russian physicist Pyotr Kapitza working for Rutherford at Cambridge University.


Unfortunately, while Pfann's method worked fine for refining Germanium, with a melting point of 937°C, it did not work for Silicon, whose melting point is 1415°C, because no suitable boat material could be found to withstand the high temperatures without contaminating the melt. The problem was solved in 1953 by Bell Labs metallurgist Henry C. Theuerer with the development of the floating zone method. He was able to create a molten zone in Silicon by holding the ingot in a vertical position and moving it relative to the heating element. In this vertical configuration the surface tension of the molten Silicon was sufficient to keep it from coming apart.


One of the earliest commercial products to evolve from the development of single-crystal Silicon was the Zener diode named after its American inventor, Bell Labs physicist Clarence Melvin Zener. The Zener diode was the first solid state voltage regulating element.


1952 Bell Labs researcher Calvin S. Fuller, working initially with Germanium, published studies showing that donor and acceptor atoms could be introduced to shallow, controlled depths in the Germanium crystal by diffusion. The method involved exposing the semiconductor to the dopants in high temperature vapour form. The depth of penetration of the dopant was controlled by the temperature, allowing the production of much thinner base layers and a corresponding improvement in the frequency range of the device. The diffusion method was an essential milestone in the development of the integrated circuit.


1952 Gallium Arsenide (GaAs) identified as a semiconductor by Heinrich Welker by now working at Siemens in West Germany.


1952 Japanese researcher Kenichi Fukui working at Kyoto University published "A molecular theory of reactivity in aromatic hydrocarbons", in which he developed a model describing the energy bands of molecules, similar to Bohr's model which described the energy bands of atoms. He showed that the probable positions of the electrons in the molecule, known as molecular orbitals, are influenced not just by the individual atoms with which they are associated, but also by other neighbouring atoms or groups of atoms within the molecule. Depending on the structure of the molecule, molecular orbitals may take on different cloud-like shapes such as spheres, barbells and tripods, which represent the probability distributions of the possible electron positions. Later known as Frontier Molecular Orbital Theory, it identified different energy levels for electrons within the molecule, the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO), and their influence on reaction mechanisms, which are analogous to the properties of the valence band and the conduction band in Bohr's theory.


Fukui's theory indicated the potential existence of a band gap in organic materials, which could allow the possibility of charge hopping between molecules of suitable conductive polymers and eventually the possibility of constructing organic semiconducting devices such as diodes and transistors.


In 1981 Fukui was jointly awarded the Nobel Prize for Chemistry, with the American Roald Hoffmann who had worked independently on the topic, for his work on the mechanisms of chemical reactions; he was the first Asian scientist to be so honoured.


See also Energy Bands and Molecular Orbitals.


1952 English radar engineer Geoffrey W.A. Dummer working for the Royal Radar Establishment of the British Ministry of Defence recognised that if circuit elements such as resistors, capacitors, distributed capacitors and transistors were all made of similar materials they could be included in a single chip. In a paper entitled "Electronic Components in Great Britain", he was the first to outline the concept of the integrated circuit as "electronic equipment in a solid block with no connecting wires". In 1956 Dummer placed a contract with Plessey to build an integrated circuit but they were unsuccessful since the only fabrication method available to them at the time was the unsuitable grown junction technique.


1952 The English engineer Sir Charles Oatley invented the scanning electron microscope (SEM) in its present form. In contrast to the TEM, in which the electron beam is detected after passing through the specimen, the electron beam in the SEM scans the surface of the specimen, and the electrons which are scattered back from the sample are detected and used to build up an image. This enables it to show three dimensional images of surface texture. This capability makes it more useful than the TEM for many metallurgical applications and an essential tool in the cell designer's armoury. See also TEM and STM.


1952 English physiologist and biophysicist Alan Lloyd Hodgkin and Andrew Fielding Huxley, a member of the distinguished Huxley family of biologists and authors, working at Cambridge with Australian research physiologist John Carew Eccles, discovered the chemical means by which nerve impulses are communicated through the body by the excitation or inhibition of nerve cell membranes. They experimented with living cells, but because the axons (the fibres which carry the nerve impulses) of almost all other nerve cells were too small to study using the techniques available at the time, they used the giant axons of the nerve cells of the Atlantic squid, Loligo pealei, which are over 100 times the size of human axons. By planting tiny electrodes in the cells they were able to record the ionic currents resulting from nerve impulses. They showed that a nerve impulse causes a temporary change in the cell membrane, expanding the minute pores which allow the interchange of potassium ions in the cell with sodium ions from outside the cell. The change in the concentrations of the ions in the cell effectively changes the electric potential difference across the cell membrane, and at the same time this activates a similar action in the adjacent section of the fibre. In this way the nerve signal is passed chemically along the nerve fibre, permitting transmission of the original impulse through the body - not electrically, as du Bois-Reymond's theory, accepted for over 100 years, had held. They called this the sodium pump method of transmission.

They were awarded the 1963 Nobel Prize for Physiology or Medicine for their work.


1953 Microtone, Maico, Unex and Radioear, all in the USA, introduced the first commercially available consumer product to use transistors: the hearing aid, powered by a mercury button cell.


1953 Intermetall (Germany) demonstrated at the Düsseldorf Radio Fair, a solid-state radio receiver using four transistrons designed by German physicists Herbert F. Mataré and Heinrich Welker while working at Westinghouse in France.


1953 American engineer Andrew F. Kay, at his company Non Linear Systems, invented the digital voltmeter (DVM), which offered 0.01% accuracy, an order of magnitude better than the analogue instruments of the time, paving the way for digital readout instruments.

In 1982 he changed the name and mission of the company to Kaypro and launched the ill-fated Kaypro II computer.


1953 Polycarbonate plastic material discovered accidentally by Daniel W. Fox at GE Labs in the USA while working on a project to develop a new wire insulation material. Almost indestructible, it finds use in products ranging from cell phone and battery casings to CDs and bullet proof glazing.


1954 ABS polymers were introduced to commercial markets by the Borg-Warner Corporation, who had patented the thermoplastic product in 1948. A wide spectrum of ABS plastics can be produced by varying the proportions of the three constituent monomers - Acrylonitrile, Butadiene and Styrene - with properties tailored to meet specific requirements. In addition to this great versatility, ABS plastics in general are distinguished by great toughness and high impact strength (even at low temperatures), good dielectric properties and excellent dimensional stability. To this are added an extremely fine gloss appearance and very wide colouring and surface texturing possibilities, which make it ideal for use in both consumer and technical products. It is the material of choice for most small battery pack housings.


1954 Introduction of Styrofoam in the USA. Like so many inventions, Styrofoam, or expanded polystyrene, was discovered accidentally. Ray McIntire working at Dow Chemical was trying to find a flexible rubber-like polymer for use as an electrical insulator by combining styrene with isobutylene, a volatile liquid, under pressure. The result was a foamed polystyrene, 30 times lighter than basic polystyrene. It is now widely used for both packaging and insulation.


1954 Patent granted to Shockley for the use of ion implantation for selective doping of semiconductor materials. It used an ion accelerator to create a beam of energised ions of dopant atoms with sufficient energy (20 to 200 keV) to penetrate into the crystal lattice. This enabled penetrations of 0.1 to 1.0 µm and the placement of precise amounts of dopants in controlled locations in the semiconductor substrate; however, at the same time it also caused collateral damage to the crystal structure which had to be repaired by a subsequent annealing process. The patent was the culmination of work started in 1949 with Ohl. Although this method of doping allowed more precise control of the semiconductor properties than the diffusion method and is used almost universally today, it did not take off at the time because ion implantation equipment was very expensive (accelerators typically cost $3 million or more) and the newly developed diffusion method was simpler and more cost effective. It wasn't until the 1967 introduction by Bower of the IGFET, whose manufacture depended on ion implantation, and the availability of less costly ion accelerators, that the technique began to gain acceptance.


Shockley's patent expired in 1974 and he derived few royalties from his invention.


1954 Using Fuller's diffusion process, Charles A. Lee at Bell Labs made the first diffused base Germanium mesa transistor (diagram). This device had a cutoff frequency of 500MHz, a factor of ten faster than the best alloy transistors of the time.

The mesa transistor gets its name from the resemblance of the built up layers or structures formed by the base and the emitter, protruding from the surface of the wafer, to the geological formations in Monument Valley, USA (Spanish mesa: table).


Early mesa transistors were made by diffusing the base layer dopants into a wafer of collector material. Then a patch of inert material, usually a wax, was applied to the doped surface where the emitter was intended to be and a strong acid was used to etch away the semiconductor, including the doping, from around the patch leaving a flat topped protrusion on the wafer. After removal of the patch, the surface was cleaned to reveal the base to which the emitter material was then alloyed.

Apart from the difficult manufacturing process, the mesa transistor had other drawbacks in that the semiconductor junctions were exposed to both contamination and physical damage, leading to unpredictable performance. These problems were eventually overcome by Hoerni's planar process.


1954 Was the year of the Silicon transistor, with breakthroughs from both Bell Labs and Texas Instruments. Germanium transistors suffered performance limitations which made them unsuitable for military applications, and they had still not made much impact in the consumer marketplace. Although it was more difficult to work with, Silicon was much cheaper than Germanium and allowed higher operating temperatures and higher power outputs, which were important to the military who funded much of the US semiconductor research and development, and it quickly replaced Germanium as the basic material for transistor production.


1954 Daryl Chapin, Calvin Fuller, and Gerald Pearson working at Bell Labs demonstrated the first practical photovoltaic solar cell that could generate useful power, using Silicon rather than Selenium. By diffusing a thin layer of P type Boron atoms into a wafer of N type Silicon they constructed large area p-n junctions which generated substantial current when sunlight fell on them, achieving conversion efficiencies of 6% compared with the 1% that had been possible for the previous eighty years with Fritts' Selenium cells. By the late 1980s efficiencies of over 20% were being achieved with Silicon and Gallium Arsenide cells, and in 1989 an efficiency of 37% was achieved by Boeing using lenses to concentrate the sunlight.


Although it may seem counter intuitive, large scale electrical power generation from solar energy is still more efficient and less expensive by using the captured thermal energy in an intermediate step to raise steam to drive turbine generators than direct energy conversion in photovoltaic cells.


1954 Also at Bell Labs, Morris Tanenbaum duplicated Lee's diffused Germanium device in Silicon to make the first diffused base Silicon transistor, but the company kept this achievement under wraps. They didn't patent it because others had developed similar processes and they decided that "from a manufacturing point of view, it just didn't look attractive". As a consequence they did not at the time put in place any manufacturing facility to support this new technology.


1954 The mighty Bell Labs with their superior diffusion technology were upstaged by an upstart company from Texas, who announced the first successful Silicon transistor, a grown junction device developed by Willis Adcock working with Bell alumnus Gordon Teal, who had left Bell to work at Texas Instruments (TI), taking with him the know-how he had developed at Bell Labs. The demand for this new high performance device was unprecedented, particularly from the military, but with no product availability from Bell, Texas Instruments was suddenly thrust into the big league, and Teal's Silicon transistor was the spark that turned TI into the company it is today.


1954 Following the first demonstration of a solid state radio by Intermetall in 1953, the first "transistorised" high volume electronic product aimed at the consumer market, the Regency TR-1 AM transistor radio, was launched in the USA. Designed by Richard C. Koch, it was a superheterodyne receiver using just four Germanium transistors from Texas Instruments, powered by a "standard" 22.5 Volt battery originally intended for tube-type hearing aids. Unfortunately it did not achieve commercial success.


1954 The first high level programming language, Fortran (Formula Translation) was invented by John Backus at IBM and released in 1957. Recognised as the forerunner of today's software applications. But see Plankalkül.


1954 George C. Devol, an inventor from Louisville, Kentucky, designed the first programmable industrial robots. In 1956, with Joseph F. Engleberger, a businessman/engineer whom he had met over cocktails to discuss the writings of Isaac Asimov, he founded Unimation, the world's first robotics company. The first machines were programmable transfer machines, or pick and place machines, whose main use was to transfer objects from one point to another. The first industrial application was by General Motors, who in 1961 used the robots for moving and positioning heavy castings on heated die casting machines. Although they were introduced as labour saving devices, one of their main virtues is the precise control, reliability and repeatability of their actions, which permitted consistent, high quality manufacturing processes. They are used extensively in battery assembly operations and are essential tools for achieving high levels of product quality and reliability.


1955 Willard Thomas Grubb, a chemist working for General Electric (GE) in the USA developed the first Proton Exchange Membrane (PEM) fuel cell.


1955 Reynold B. Johnson and his team working at IBM produced the first ever working hard disk drive. The RAMAC (Random Access Method of Accounting and Control), as it was called, was very large, weighing in at one ton. Data was stored on fifty 24 inch magnetic disks, coated on both sides with magnetic iron oxide and rotating at 1200 RPM on a single shaft. It used vacuum tube control electronics and a single read/write head assembly, shared between the fifty platters, giving a file access time averaging about one second. Its storage capacity was 5 million characters (less than 5 megabytes, since they were 7 bit rather than 8 bit characters), roughly the equivalent of only one song on a modern iPod. It was not until 1961 that separate heads for each platter were used. Today, all hard disk drives are based on Johnson's basic system.


1955 The USS Nautilus, the world's first nuclear powered submarine "Under way on nuclear power". A ship with a crew of 105 men, 98 metres (324 feet) long, displacing 3533 tons on the surface and 4092 tons submerged it could stay submerged for weeks and cover vast distances at high speed. It was powered by a lump of enriched Uranium the size of a golf ball which could keep it fully operational for several years without refueling. (One pound of highly enriched Uranium as used to power a nuclear submarine or nuclear aircraft carrier contains about the same energy as a million gallons of petrol/gasoline.)


1955 The first successful Caesium Atomic Clock was built by physicists Louis Essen and Jack Parry at the National Physical Laboratory (NPL) in the UK. It kept time to one second in 300 years, or just under 1 part in 10^10.

The idea that atomic beam magnetic resonance might be used as the basis of a clock was first suggested in 1945 by Isidor Rabi, a physics professor at Columbia University. The suggestion was taken up by the U.S. National Bureau of Standards (NBS - now the National Institute of Standards and Technology, NIST), which produced a "proof of concept" model in 1949 using the ammonia molecule as the source of vibrations. Unfortunately its accuracy of 5 parts in 10^7 was worse than the 2 parts in 10^8 of contemporary laboratory quartz clocks.

Since 1955 many variants have been produced using other elements including Strontium, Hydrogen, Thallium, Rubidium and Ytterbium with ever increasing accuracies.


See more about The Atomic Clock and How it Works


In 1967, the General Conference on Weights and Measures (CGPM) defined the International Standard (SI) "second" as the duration of 9,192,631,770 cycles of radiation corresponding to the transition between two energy levels of the Caesium-133 atom. This meant that the world's timekeeping was no longer based on the motion of the Earth, which was not reliable enough. Since then a network of atomic clocks synchronised to an accuracy of 10^-8 seconds (10 nanoseconds) per day (approximately 1 part in 10^14) has been built up by national standards agencies in many countries. The accuracy of the NPL's latest (2014) Caesium clock is 2.3 parts in 10^16, or about one second in 130 million years.
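The arithmetic connecting the two ways of quoting clock accuracy is straightforward (a quick check of the figures above in Python):

    # Convert "one second lost in N years" into a fractional accuracy and back.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    # Essen and Parry's 1955 clock: one second in 300 years
    print(1 / (300 * SECONDS_PER_YEAR))      # ~1.1e-10, i.e. about 1 part in 10^10

    # NPL's 2014 clock: 2.3 parts in 10^16
    print(1 / (2.3e-16 * SECONDS_PER_YEAR))  # ~1.4e8, i.e. roughly the "130 million years" above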

Atomic clocks are used extensively as frequency standards to synchronise signals in communications networks. They also provide the timing references on board the GPS satellites, with each satellite carrying 4 Rubidium atomic clocks synchronised with each other and stable to within 1 part in 10^12. (The first GPS satellites used 2 Caesium clocks and 2 Rubidium clocks.)


1955 Battery manufacturers (if not carpet makers) have a lot for which to thank Zenith Radio Corporation engineer Eugene Polley. Challenged by Commander Eugene F. McDonald Jr. the company's founder, to develop a device to "tune out annoying commercials", he created the first wireless TV remote control, the Flash-matic which used a flashlight to activate photocells on the TV set.

The following year, Austrian born Robert Adler, another Zenith engineer, invented an ultrasonic version of the device dubbed the "Space Command". It used ultrasonic tones to actuate stepper motors in the TV set to turn it on and off, to change the channel, and to adjust the volume.

Now sold in their millions each year and all needing batteries, today's products use infra red or radio transmission.


The remote control started a general trend for more portability with users no longer being satisfied with being tied to the electrical mains or telephone socket outlets.


1955 Patent granted to Chinese-born US computer engineer An Wang for the invention of the magnetic core computer memory. Wang's design actually predates that of Forrester, who is also credited with the invention. Core memory was built up from tiny rings of ferro-magnetic material, or ferrite cores, each just over 1 millimetre in diameter, through which passed a matrix of fine wires, perpendicular to each other, forming an array or grid. When a core was magnetised one way it represented a one; when magnetised in the opposite direction it stood for a zero. Cores were random access devices, which meant that individual cores could be accessed directly by addressing the appropriate wires on the grid without disturbing any of the other cores. Wang's 1949 design used a single wire and "write after read" electronics to overcome the problem that the act of reading actually erased the memory. Forrester's design used multiple wires and the "coincident current" method of reading and writing. Magnetic core memory had been commercially available since 1953, but in the meantime the dispute over the intellectual property rights gave rise to many law suits, which were eventually settled by IBM purchasing Wang's patents for several million dollars. Forrester's random access memory (RAM) design became the industry standard for the next 20 years until the advent of cheap semiconductor memory.
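A toy model of the grid addressing in Python (illustration only; none of the drive or sense electronics, just the logic of selection and destructive read-out):

    # Core plane model: a core switches only when BOTH its row and column
    # wires carry a half-current (coincident current selection), so one
    # core in the grid can be set without disturbing its neighbours.
    class CorePlane:
        def __init__(self, size):
            self.bits = [[0] * size for _ in range(size)]

        def write(self, row, col, value):
            self.bits[row][col] = value      # only (row, col) sees two half-currents

        def read(self, row, col):
            value = self.bits[row][col]      # the sense wire detects whether the core flips
            self.bits[row][col] = 0          # reading drives the core to zero (destructive)...
            self.write(row, col, value)      # ...hence Wang's "write after read" restore step
            return value

    plane = CorePlane(8)
    plane.write(2, 5, 1)
    print(plane.read(2, 5), plane.read(2, 5))   # 1 1 - the restore preserves the bit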


1955 101 years after Tyndall showed that a light beam could be transmitted down a curved light pipe, Indian scientist Narinder Singh Kapany, working at Imperial College, London, showed that a solid glass-coated glass rod, the forerunner of optical fibre, was able to transmit light over long distances with little loss of intensity. The glass coating, or cladding, prevents the light from leaking out of the core: by Snell's Law, the lower refractive index of the cladding with respect to the core causes light impinging on the boundary to be reflected back into the core. It was another 11 years before Kao and Hockham repeated this with flexible optical fibres.
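The geometry follows directly from Snell's Law. A minimal sketch (the refractive indices are illustrative values typical of modern fibre, not measurements of Kapany's rod):

    import math

    # Total internal reflection occurs for rays striking the core-cladding
    # boundary beyond the critical angle, where sin(theta_c) = n_clad / n_core.
    n_core, n_clad = 1.48, 1.46              # illustrative refractive indices
    theta_c = math.degrees(math.asin(n_clad / n_core))
    print(f"critical angle: {theta_c:.1f} degrees from the normal")   # ~80.6

Rays travelling nearly parallel to the fibre axis strike the boundary at more than this angle and are therefore trapped in the core.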


1955 Arthur Uhlir and A.E. Bakanowski working at Bell Labs develop the varactor diode or varicap. These are P-N diodes used as voltage-controlled capacitors in tuning circuits including PLL (phase-locked loop) and FLL (frequency-locked loop) circuits and were used extensively in television receivers.


1955 Shockley left Bell Labs to start his own company, Shockley Semiconductor Laboratory (a division of Beckman Instruments). He hired a team of young semiconductor wizards but focused their attention narrowly on producing a four layer diode, the Shockley diode, which he had proposed in 1950. Intended as a replacement for the relays used in their millions by AT&T in their telephone switching circuits, it was ahead of its time, complex and very difficult to produce. Frustrated at not being able to explore their own ideas or the opportunities of the rapidly expanding semiconductor industry, and disillusioned with Shockley's management style, many of his staff left. The potential applications for the four layer diode were soon captured by the newer integrated circuits and the Shockley diode was relegated to a few niche applications.


1955 British physicist John D. Lawson derived the minimum value of the triple product of the plasma electron density, the energy confinement time and the plasma temperature necessary for ignition to take place in a nuclear fusion reactor, now known as the Lawson Criterion. See more details in the section on nuclear fusion.
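For deuterium-tritium fuel the condition is commonly quoted today in triple product form (the threshold below is the generally cited modern figure; Lawson's original 1955 analysis expressed the condition as a product of density and confinement time, with the temperature treated separately). In LaTeX notation:

    n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21} \ \mathrm{keV \, s \, m^{-3}}

where n is the plasma electron density, T the plasma temperature and tau_E the energy confinement time.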


1955 Mathematician Cecil Hastings, working at the RAND Corporation, published "Approximations for Digital Computers", in which he outlined algorithms for estimating the values of transcendental functions. Algorithms based on his methods, rather than the more common Taylor expansion, were used on the guidance computer of Apollo 11 to calculate the values of trigonometric functions.

See more about Hastings approximations.
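The benefit of fitted polynomials over truncated Taylor series is easy to demonstrate (a sketch using a Chebyshev least-squares fit as a stand-in for Hastings' minimax coefficients, which are not reproduced here):

    import numpy as np

    # Compare a degree-5 Chebyshev fit of sin(x) on [0, pi/2] with the
    # degree-5 Taylor polynomial x - x^3/6 + x^5/120 about x = 0.
    x = np.linspace(0, np.pi / 2, 1000)
    cheb = np.polynomial.Chebyshev.fit(x, np.sin(x), 5)
    taylor = x - x**3 / 6 + x**5 / 120

    print(f"fitted polynomial max error: {np.max(np.abs(cheb(x) - np.sin(x))):.1e}")
    print(f"Taylor polynomial max error: {np.max(np.abs(taylor - np.sin(x))):.1e}")

The fitted polynomial spreads its error evenly over the whole interval, while the Taylor polynomial is very accurate near zero but increasingly poor towards pi/2 - the trade-off that Hastings-style approximations exploit.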


1956 The first TransAtlantic Telephone Cable, TAT 1, was opened in September 1956, almost 100 years after the first TransAtlantic Telegraph Cable was laid in 1858. Owned by a joint venture between the American Telephone and Telegraph Company (AT&T) with 50%, the British Post Office (GPO) with 41% and the Canadian Overseas Telecommunications Corporation with 9%, it was laid between Clarenville, Newfoundland and Oban, Scotland, a distance of 1950 nautical miles (2250 statute miles). The signals were actually carried on two cables, one in each direction, and each cable contained 51 unidirectional repeaters (amplifiers), spaced at 37 nautical mile (43 mile) intervals to compensate for the signal loss along the cable.


The first commercial voice link across the Atlantic had been launched in 1927 with just a single radio telephone circuit which carried an average of 2000 calls per year; however, radio communications were expensive, insecure and unreliable due to signal fading caused by adverse atmospheric conditions. It was not until the 1940s that the technologies necessary for reliable, low cost communications - low distortion, low loss coaxial cables and long life undersea telephone repeaters - became available.


The TAT 1 project was conceived and managed by engineers at AT&T's Bell Laboratories and the British Post Office Engineering Department. The coaxial cables were designed and manufactured by Submarine Cables Ltd at Erith, Kent in the UK, with polyethylene insulation replacing the gutta percha used in previous telegraph cables. DC power for the repeaters was provided by a high voltage on the inner conductor of the cable. The repeaters were designed by Bell Labs and used three highly reliable vacuum tubes providing a gain of 65 dB over a 144 kHz bandwidth around a 164 kHz carrier frequency. Though transistors, invented by Bell Labs, had been around since 1947, they were still in the development stage and judged not to be as reliable as the miniature vacuum tubes, which had undergone many years of life testing and had proved themselves in military applications.

Initially the cable was used to carry 36 simultaneous voice channels, each with a bandwidth of 4 kHz (together filling the repeaters' 144 kHz bandwidth), with one of these channels designated to carry 22 telegraph signals. The channel capacity was later increased to 48 channels by narrowing the available voice bandwidth to 3 kHz.

The repeaters were 8 feet (2.44 m) long, with a diameter of 2.875 inches (7.3 cm) tapering down to the cable width of 1.625 inches (4.13 cm) over twenty feet (6 m). A rigid housing of this size would normally cause problems in cable laying, so the TAT 1 repeaters were designed to be flexible so that they could be wound over a standard cable drum, minimising cable handling problems.


TAT-1 was retired in 1978 as other, higher capacity TAT cables with transistorised repeaters became available, until these in turn were overtaken by fibre optic technology in 1988, when the first transatlantic telephone cable to use optical fibre, TAT-8, went into operation.


1956 The Silicon Controlled Rectifier (SCR) or Thyristor, proposed by Shockley in 1950 and championed by Moll and others at Bell Labs, was developed first by power engineers at G.E. led by Gordon Hall and commercialised by G.E.'s Frank W. "Bill" Gutzwiller. It is a four layer, three junction pnpn device, originally conceived simply as a logic switching element but developed as a high current switch to control large amounts of power. It is used extensively in motor control, dimmers and similar applications.


1957 (October 4) Russia stunned the world with the launch into orbit of Sputnik (meaning "Fellow Traveller"), the world's first artificial satellite, travelling at the unheard of speed of 18,000 mph (29,000 kph) and orbiting the Earth every 98 minutes at an altitude of 560 miles (900 kilometres). Conceived and brought to fruition by engineer and aviator Sergei Pavlovich Korolev, leader of Russia's missile programme, Sputnik 1 was an 84 kg (184 lb) Aluminium alloy sphere, 56 cm (22 inches) in diameter, containing three Silver-Zinc batteries, one powering a thermal regulation system and the other two powering a radio which transmitted temperature and pressure data and the "beep beep" sound which announced its presence to the world below. This was one year before the invention of the integrated circuit.


The Sputnik 1 was launched by a two stage rocket named the R-7 (See Soviet R-7 Rocket). Not only was this an embarrassment to the United States, which prided itself on its leadership in the field of technology, the advanced state and sophistication of the Russian technology indicated to the USA that for the first time they could be within the range of Intercontinental Ballistic Missiles (ICBM) launched from Russia.

Sputnik's cheery "beep beep" signalled the start of the Space Race.


Adding to U.S. embarrassment, Sputnik 2 followed a month later on November 3 carrying a dog named "Laika", the first living animal to orbit the Earth.


See more about Sputnik


Korolev is considered to be the father of modern Russian rocketry. In the 1930s he headed GIRD (Group for the Study of Reactive Motion), a Moscow-based group of rocket enthusiasts who built and tested the first liquid-propellant rockets in the U.S.S.R. After World War II he spent a year in Germany as a member of the Russian team gathering information about the German V2 rocket programme. Returning to Russia in 1946 he was appointed chief designer of Department No. 3 of Stalin's new NII-88 ("Scientific-Research Institute No. 88") with the responsibility of building the R-1 rocket, the Soviet version of the V2. He successfully completed the task by the end of the decade with several R-1 launches at the Kapustin Yar test site. In 1956 Korolev's team was restructured into the large design organisation known as OKB-1 ("Experimental Design Bureau No. 1").


There followed 10 golden years of achievement by OKB-1 under Korolev's inspirational leadership including:

  • The first Soviet intermediate range nuclear-tipped ballistic missile, known as the R-5M (called the SS-3 by Americans)
  • The world's first submarine-launched ballistic missile, the R-11FM, a modified version of the Scud SS1 Tactical Ballistic Missile, developed at OKB-1 by engineer Victor Makeev, which itself was a derivative of the Wasserfall version of the German V2.
  • The world's first Intercontinental Ballistic Missile (ICBM) known as the R-7 Semyorka (meaning "The Digit 7" or "Group of Seven") (also known as SS-6) which successfully flew 4000 miles (6,500 kms) from Baikonur in Kazakhstan to the eastern tip of the Soviet Union in August 1957.
  • The same year Korolev convinced the Soviet government that he could launch a satellite into orbit around the Earth using the same R-7 ICBM, and on October 4, 1957 he launched the world's first satellite, Sputnik 1, into orbit around the Earth, followed by Sputnik 2 a month later.
  • The R-7 became the basis of a family of rockets which launched the Sputnik, Vostok, Voskhod, Luna, Venera and Molniya spacecraft, as well as being the basis for the modern Soyuz (meaning "Union") rocket.

  • Korolev followed up with a series of manned spacecraft, launching the first human spaceflight in history (Yuri Gagarin in Vostok, meaning "East", in 1961), the first woman in space (Valentina Tereshkova in Vostok 6 in 1963), the first multi-person space flight (Voskhod, meaning "Sunrise", in 1964), and the first spacewalker (Alexei Leonov from Voskhod 2 in 1965).
  • In deep space exploration, OKB-1 launched the first probe to reach the Moon (Luna 2, meaning "Moon", in 1959), the first to take pictures of the Moon's far side (Luna 3 in 1959), the first to soft-land on the Moon (Luna 9 in 1966), and the first to reach Venus (Venera 3, meaning "Venus", in 1966).
  • OKB-1 provided the first Soviet surveillance satellites: Zenit 2 (meaning "Zenith") in 1961, capable of unmanned photo reconnaissance, and Zenit 4 in 1963, providing higher resolution images. The Zenit spacecraft was a derivative of the Vostok.
  • In 1965, the first Soviet Communications Satellite, the Molniya 1 (meaning "Lightning"), launched into a Highly Elliptical Orbit (HEO), was also the first in the world to provide national TV coverage by satellite.

Korolev, responsible for Russia's great achievement in space, died in 1966 at the age of 59 when he bled to death on the operating table during a botched operation to treat colon cancer.


1957 Japanese research student Leo Esaki, working for Sony, discovered the tunnel diode, the first quantum electron device. It depends on an effect called "quantum mechanical tunnelling", discovered by Hund in 1927, which gives rise to a region in the forward characteristic where an increase in forward voltage is accompanied by a decrease in forward current. See tunnel diode characteristic. This negative resistance (dI/dV) region had been discovered and exploited by Losev in the 1930s in the design of high frequency oscillators. Tunnelling means that a particle such as an electron can pass from one side of a very thin barrier to the other without passing through the barrier. Esaki was awarded a Nobel Prize in Physics for his efforts, at the time one of only three Japanese scientists to be so honoured. By coincidence all three had attended the same high school.


1957 The first widely-accepted theoretical understanding of superconductivity was advanced by American physicists John Bardeen, inventor of the transistor, Leon Cooper, and John Schrieffer. Known as the BCS theory, it won them the Nobel Prize in 1972 (the second time for Bardeen).


1957 The first life saved by applying an electric shock to the heart using the closed chest electrical defibrillator designed by American engineer William Bennett Kouwenhoven. Ventricular fibrillation (VF) is a life threatening condition in which the heart no longer beats but quivers or fibrillates very rapidly 350 times per minute or more. A person cannot survive VF for long. Note that one cause of fibrillation is a low voltage electric shock. The defibrillator works by applying a second more powerful jolt to the heart to restore the normal rhythm. More on electric shocks


1957 The Hamilton Watch Company in the USA claimed the World's first commercial production of electric watches.


Hamilton's first working prototype was made in 1951 by Fred Koehler. The first retail model, the 500, launched in 1957, was designed by Phillip E. Biemiller and James H. Reese, led by chief physicist John Van Horn.

Their system was very similar to the Lip electric watch design, except that the balance wheel impulses were generated from fixed magnets and a coil integrated into the moving balance wheel, whereas the Lip's impulses were generated from fixed coils with magnets integrated into the moving balance wheel.

The performance was also very similar to that of the Lip electric watch.


See more about The Hamilton Oscillator and How it Works.


1957 The recombinant Gel SLA or VRLA Battery patented in Germany by Otto Jache working at Sonnenschein Battery. The gel impedes the release to the atmosphere of the Oxygen and Hydrogen gases produced by the galvanic action of the battery during charging and promotes recombination of these gases, thus reducing the loss of electrolyte and increasing the life of the battery.


1957 Patrick J. Hanratty, working at General Electric in the USA, developed PRONTO, a programming language for implementing numerical control of machine tools, the basis for Computer Aided Manufacturing and the world's first commercial CAD/CAM software. Modern CAD/CAM software systems are now indispensable tools for the fast turnaround of complex product and tool designs.


See also Micromosaic


1957 Carl J. Frosch and Link Derrick of Bell Labs announced their discoveries, dating from 1954, that the surface of a Silicon crystal can be readily oxidised by heating it to about 1200°C in an atmosphere of water vapour or oxygen to form a stable layer of Silicon dioxide (SiO2), an insulator which is impervious to moisture. They showed that this passivating layer has three major uses: it can be used simply as a barrier to protect the semiconductor device from contamination; it can be used to mask the surface of the Silicon during diffusion, allowing the precise placement of dopants through windows etched in the oxide layer; and it supports the application of an overlay of metallic interconnecting circuits which it insulates from the lower layers. This latter property was essential to the development of the planar transistor.


During this development period in 1955 Jules Andrus and Walter L. Bond also at Bell Labs developed the wet chemistry process of photolithography employed by Frosch and Derrick. It uses optical masks and photoresists for masking and etching the oxide layer to create an oxide mask on the surface of the Silicon which in turn exposes only the precise areas to be doped during the diffusion process.


Advances in photolithography providing ever smaller line widths have made the scaling of integrated circuits possible leading to dramatically improved performance. Smaller device geometries increase the component density on the chip allowing more chips per wafer and thus lower manufacturing costs, but equally important, they also allow reduced power consumption and increased operating speed. Shorter tracks reduce the device resistance as well as the electron transit times, smaller gates decrease the device capacitance permitting higher frequency operation.


1957 The concept of computer time-sharing was first described by IBM computer scientist Robert William Bemer in an article in Automatic Control Magazine, though it was not taken up at the time by the company. Around the same time John McCarthy, creator of the Lisp programming language and pioneer of Artificial Intelligence (AI), working at MIT's AI lab, began work on developing a practical system.

The usage profile of computer time by individual users is typically characterised by short bursts of activity during data entry or information processing, between which there are long periods while the computer is waiting for input or for access to external storage or output devices. Time-sharing enabled the computer's waiting time to be allocated to other users to ensure optimum use of computer time, giving multiple users simultaneous access to expensive computer facilities without increasing the machine's capacity. This background allocation task is imperceptible to the users; however, the computer needs special operating system software to deliver this functionality.
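A sketch of the underlying scheduling idea (an illustrative round-robin model in Python; not how CTSS or McCarthy's demonstration was actually implemented):

    from collections import deque

    # Round-robin time-sharing: each session receives a short quantum of
    # processor time and then rejoins the back of the queue, so every
    # user sees steady progress while finished sessions simply drop out.
    def run(sessions, quantum=2):
        queue = deque(sessions.items())
        while queue:
            name, work_left = queue.popleft()
            done = min(quantum, work_left)
            print(f"{name}: ran {done} unit(s)")
            if work_left > done:
                queue.append((name, work_left - done))

    run({"alice": 5, "bob": 3, "carol": 1})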

In 1959 McCarthy demonstrated time-sharing in practice, using his own operating system code on an IBM 704 computer to which he had been permitted access. See more about Operating Systems.


The result was the Compatible Time-Sharing System (CTSS), demonstrated in 1961, a year after McCarthy had left MIT, but it wasn't put into regular operation until the concept was picked up by ARPA for use in the ARPAnet project.


Up to that time computer users had to submit their work for batch processing by an all-powerful IT department which controlled the scheduling of expensive mainframe computers, and users might have to wait several hours to receive their results. Time-sharing allowed individual users to have interactive access to these machines, greatly increasing the productivity of both the machine and the users.


See also Key Internet technologies and Computer Systems Design.


1958 PEM fuel cells were improved by Leonard Niedrach, who devised a way of depositing Platinum onto the membrane; the result ultimately became known as the "Grubb-Niedrach fuel cell". GE and NASA developed this technology together, resulting in its use on NASA's Gemini space project. This was the first commercial use of a fuel cell.


1958 The planar process (diagram) for manufacturing transistors invented by Jean Hoerni at Fairchild Semiconductor. In an attempt to solve the contamination problems of mesa transistors he diffused the junctions down into the Silicon instead of building them up into a mesa. He was then able to deposit a thin layer of Silicon dioxide over the junctions to act as an insulator. Using a photomasking process, holes were etched open in the Silicon dioxide to permit connections to be made to the junctions. Later the addition of a metal layer enabled interconnections to be made, eliminating wires and paving the way for the integrated circuit (See next).


1958 Jack St. Clair Kilby working at Texas Instruments and Robert Noyce working independently at Fairchild Semiconductor invented the first monolithic integrated circuits (Greek: monos - single and lithos - stone), for which they subsequently applied for patents in 1959. They are now simply called integrated circuits or ICs. Kilby's IC, a phase shift oscillator, was the first. It incorporated one mesa transistor, three resistors and a capacitor on a single Germanium chip, but the interconnections were still made by conventional welded wire leads, since Kilby had not developed a way of making them directly on the chip.


Kilby's integrated circuit was followed six months later in 1959 by Noyce's flip flop IC. It is usually claimed that it was Noyce's use of the planar process for manufacturing the ICs, which enabled the conducting tracks for interconnecting the components to be incorporated onto the silicon substrate, that made the integrated circuit commercially successful. While this is true, Noyce also made use of another technology equally essential to the functioning of the integrated circuit. He needed a method of electrically isolating the individual devices within the IC from each other, and for this he used the concept of back to back PN junctions, invented for that purpose in 1959 by Czech-born physicist Kurt Lehovec of Sprague Electric.


The use of the Lehovec patent was an integral feature of the original integrated circuit chip design and remains fundamental to chip design today, and although Noyce himself recognised Lehovec's contribution, the rest of the world seems to have overlooked it. While honours were heaped upon Kilby, and both Hoerni and Noyce achieved commercial fame and fortune, the forgotten Lehovec is reported to have said "I never got a dime out of [the patent]."


Although Kilby was first to file for a patent, his application was rejected because it lacked a way of interconnecting the components, and Noyce was granted the first patent in 1961. After protracted legal battles, Texas Instruments and Fairchild Semiconductor finally agreed to share their licensing agreements for ICs, and Noyce and Kilby, by then considered co-inventors of the IC, were jointly awarded the US National Medal of Science for their invention. Forty years later, in 2000, Kilby was awarded the Nobel Prize for Physics in recognition of his contribution to the invention of the IC. By that time Noyce had been dead for 10 years, and Nobel Prizes are not awarded posthumously.


A modest man, Kilby is quoted as saying "In contrast to the invention of the transistor, this was an invention with relatively few scientific implications. By and large you could say that it contributed very little to scientific thought."


1958 Invention of the laser announced with the publication of the scientific paper, "Infrared and Optical Masers", by Arthur L. Schawlow, then a Bell Labs researcher, and Charles Hard Townes, a consultant to Bell Labs in "Physical Review", the journal of the American Physical Society.

However, Gordon Gould was the first person to use the word "laser". A doctoral student at Columbia University under Charles Townes, the inventor of the maser (a similar device based on microwave rather than optical amplification), Gould began work on his optical laser in 1958 but failed to file for a patent for his invention until 1959. As a result, Gould's patent application was refused and his technology was exploited by others. It took until 1977 for Gould to finally win his patent war and receive his first patent for the laser. See also Maiman (1960)


1958 Looking for cosmic rays using a Geiger counter installed on the first U.S. satellite, Explorer 1, American astrophysicist James Van Allen and his team working at the University of Iowa discovered the existence of a previously unknown toroidal belt of high energy charged particles, or plasma, now called the Van Allen Radiation Belt, encircling the Earth. Held in place by the Earth's magnetic field, it is centred along the Earth's magnetic equator, with its intensity diminishing towards the poles, and extends from the upper atmosphere through the magnetosphere, or exosphere. The results were confirmed by further tests carried out by Explorer 3, a similar satellite launched later the same year, and radiation levels at different altitudes were mapped by subsequent space probes. See diagram and more about the extent and origins of the Van Allen Belt

Van Allen noted that the particle population of the Earth's radiation belts made it very dangerous for humans to be exposed to this radiation without massive shielding even if they were just quickly passing through it. He also realised that electronic equipment used for communications and control in spacecraft travelling in the region would be similarly vulnerable to radiation damage.


The discovery of the Van Allen radiation belt was considered to be one of the outstanding discoveries of the International Geophysical Year (IGY).

The IGY was a cooperative international scientific project, running from July 1, 1957 to the end of 1958, which marked a thaw in Cold War scientific relations. Sputnik 1, launched on October 4, 1957, was the Soviet Union's contribution to the project.


1959 Canadian Lew Urry patented the first modern primary Alkaline battery. The principle on which the alkaline cell is based, substituting Manganese Dioxide for Mercury Oxide in the Ruben cell, was discovered in the late 1940s just after World War II but it took nearly twenty years of development before the product as we know it today was introduced by Ever Ready and Duracell between 1968 and 1970.


1959 Harry Karl Ihrig of Allis-Chalmers, an American farm equipment manufacturer, demonstrated the first fuel cell powered vehicle, a farm tractor, using 1008 cells to provide 15 kW.


1959 Richard Phillips Feynman published "There's Plenty of Room at the Bottom", describing the manipulation of individual atoms and outlining the principles of nanotechnology, though it was not called that at the time.


1959 Jack E. Volder, an engineer working on analogue computers for aircraft guidance systems at Convair in Fort Worth, discovered a simple, fast and elegant algorithm for developing mathematical approximations of trigonometrical functions, which he called the COordinate Rotation DIgital Computer, CORDIC. He was able to modify and extend its application to more general transcendental functions, and the CORDIC algorithm was rapidly adopted by the computer industry. Hardware versions are now available in dedicated integrated circuits.

See more about CORDIC in the page about Digital Logic.
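A minimal floating-point rendering of the algorithm (Volder's hardware used fixed-point shifts and adds; this Python sketch keeps the structure but not the integer arithmetic, and converges for angles up to about ±1.74 radians):

    import math

    # CORDIC in rotation mode: starting from the vector (1, 0), rotate by a
    # fixed sequence of angles atan(2^-i), choosing the direction at each
    # step so that the residual angle z is driven to zero. Each pseudo-
    # rotation needs only shifts and adds in hardware; the known overall
    # gain is corrected by the constant K at the end.
    def cordic_sin_cos(theta, iterations=32):
        angles = [math.atan(2.0 ** -i) for i in range(iterations)]
        K = 1.0
        for i in range(iterations):
            K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = 1.0, 0.0, theta
        for i, a in enumerate(angles):
            d = 1.0 if z >= 0 else -1.0              # rotate towards z = 0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * a
        return y * K, x * K                          # (sin, cos) after gain correction

    s, c = cordic_sin_cos(0.5)
    print(s - math.sin(0.5), c - math.cos(0.5))      # both differences are tiny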


1960 Physicist Theodore Harold Maiman, working at Hughes Research Labs, developed, demonstrated, and patented the first commercially successful operable laser, a device which produces monochromatic coherent light, using a pink ruby medium, for which he received worldwide recognition. Amongst its many applications, laser technology is used for fibre optic transmitters, holography, range finders and for precision cutting and welding (the sealed metal cases of energy cells are welded by lasers). See also Gould (1958)


1960 Electroluminescence from an organic material was first demonstrated in anthracene crystals by Martin Pope and Hartmut Kallmann, working at New York University. They discovered that, while organic semiconductors are in principle insulators, they become semiconducting when charge carriers are injected from the electrodes. They found that a "hole" current can flow through an anthracene crystal when it is contacted with a positively biased electrolyte containing iodine, which can act as a hole injector. The organic layers however were several millimetres thick and, due to the low charge mobility and the resulting low conductivity and conversion efficiency, a power supply of around 100 Volts was necessary to generate the light emission. Nevertheless, this was the first step towards the development of the organic LED (OLED).


1960 Dawon (David) Kahng and Mohamed (John) Atalla at Bell Labs invented the metal oxide semiconductor field-effect transistor (MOSFET), a new implementation of the FET in planar form in which the metallic gate was separated from the semiconducting channel by an insulating layer of Silicon dioxide. PMOS and NMOS MOSFETs were cheaper, smaller and less power-hungry than bipolar transistors, but the first designs were also slower and took a long time to gain market acceptance.


Two years later in 1962, Stefan R. Hofstein and Frederick P. Heiman, two young engineers at RCA's research laboratory, incorporated the MOSFET design into the first MOS integrated circuit, consisting of 16 Silicon n-channel MOS transistors.


Pioneering work on MOSFETs was also carried out in 1961 at Fairchild by quiet Chinese physicist Chih-Tang Sah (known as "Tom"), another Shockley alumnus who had followed the "traitorous eight" to Fairchild.


The 'Metal' in the name is an anachronism from early devices whose gates were metal, usually Aluminium. In modern chips the gate electrode is formed from polysilicon (polycrystalline Silicon), which is also a good electrical conductor but which can better tolerate the high temperatures used to anneal the Silicon after ion implantation; they are nevertheless still called MOSFETs.


1960 Epitaxial deposition or epitaxy (Greek: epi - on and taxis - arrangement) was developed by Howard H. Loar, Howard Christensen, Joseph J. Kleimack, Henry C. Theuerer and Ian Munro Ross at Bell Labs for growing a new crystal layer of one material on the crystal face of another (heteroepitaxy) or the same (homoepitaxy) material, such that the new layer has the same crystallographic orientation as the substrate. Very thin crystal layers can be built up in this way, allowing better control of the doping thickness and abrupt changes in the doping concentrations, providing doping profiles unobtainable with other methods. The substrate is unaffected by the process and may be designed for optimum mechanical strength or thermal conductivity. Performance could be optimised for both high frequency and high power by having a thin base layer and a low resistivity collector on a substantial substrate.


In 1968 Alfred Y. Cho and John R. Arthur also working at Bell Labs perfected molecular beam epitaxy (MBE), an ultra-high vacuum technique that could produce single-crystal growth one atomic layer at a time.


1960 After 3 years of development, American electrical engineer Wilson Greatbatch launched the world's first successful totally implantable heart pacemaker. The first units were powered by Mercury (Ruben) cells; the long life Lithium Iodine primary battery, which Greatbatch also developed and which became the standard pacemaker power source, followed in the early 1970s. Today over 3 million people around the world have electrically powered implants and over 500,000 new pacemakers alone are installed every year.


1960 Ivan Sutherland, working at MIT's Lincoln Laboratory, began developing Sketchpad (demonstrated in 1963), which used a light pen to draw on a computer's monitor. It is considered the first step towards Computer Aided Design and the basis on which many commercial CAD packages were founded.


See also PRONTO and Micromosaic


1960 America's Bulova Watch Company launched the revolutionary Accutron electronic watch, the first watch to incorporate a transistor. Guaranteed to be accurate to within 2 seconds a day or 1 minute a month, the design was said to be the greatest advance in the field of watchmaking in over 200 years. At the time the accuracy of a typical mechanical watch was about ±10 seconds per day while Swiss "certified chronometers" were rated at -4/+6 seconds per day.

In 1952, the company president Arde Bulova had asked Swiss electronics engineer Max Hetzel to investigate the potential of producing a battery powered watch to compete with the recently announced electric watches from Elgin and Lip. Hetzel pointed out that their accuracy was limited by the use of a traditional balance wheel oscillator and that greater accuracy could be achieved by the use of electronic oscillators based on the recently developed transistor. He was also aware that a rudimentary clock mechanism using a mechanical escapement connected directly to one of the tines (arms) of a large, 100Hz, tuning fork had been patented in 1866 by Louis Francois Clement Breguet but it had no way to sustain the oscillation for a practical duration.


As a result of his study, Hetzel came up with the concept of an electronic watch with an oscillator based on a tuning fork resonator sustained by a simple electronic circuit incorporating a transistor used as a switch. This also reduced the number of components including moving parts by more than half and eliminated the need for the troublesome electrical contacts used in earlier electric watches. Furthermore, oscillating at a higher frequency than typical mechanical watches, it was less susceptible to mechanical shocks.

While the concept was beautifully simple, it needed exceptionally fine precision engineering to bring it to reality. American engineer William O. Bennett was later assigned to work with Hetzel to turn the design into a viable product. The result was the Accutron 214.


The first prototype watch movements were produced in Switzerland in 1955 and the design was completed in 1959.

Accutron technology was used extensively by NASA in their early space program missions and an Accutron watch movement remains on the Moon's Sea of Tranquility today, in an instrument placed there in 1969 by the Apollo 11 astronauts, the first men on the moon.


See more about The Accutron Watch Design and How it Works.


1960 Ivan Alexander Getting, VP of Raytheon's Missile Division, with his colleague Shep Arkin, proposed to the US Air Force a three-dimensional, time-difference-of-arrival position-finding system developed at Raytheon for tracking and controlling intercontinental ballistic missiles (ICBMs) while on the ground or in flight. It used the same location principles of intersecting hyperbolas as Gee or LORAN.

Six weeks later Getting was appointed founding President of The Aerospace Corporation, a non-profit military systems development organization, where he was responsible for studies on the use of geostationary satellites instead of fixed ground stations for providing the timing signals from which navigation coordinates could be calculated. This was the germ from which Navstar and the Global Positioning System (GPS) were developed. While Getting was the evangelist for the project in the face of early resistance from the Pentagon, the actual job of defining the architecture, developing the hardware and deploying the system was carried through by fellow engineer Bradford Parkinson of the U.S. Air Force, who was appointed in 1972 as Department of Defense program director.

At the time the Air Force and the Navy were each working on rival proposals: "Project 621B", a satellite system which depended on ground stations, managed by Parkinson at the Air Force, and "Timation", a passive system, independent of ground support, conceived by Roger L. Easton in 1964 at the U.S. Naval Research Laboratory. Parkinson brought these programs together and the final GPS system which emerged used satellite borne atomic clocks and a timing evaluation system based on a 1974 enabling patent, 'Navigation Systems Using Satellites and Passive Ranging Techniques', by Easton. It also used spread spectrum technology, as used in the USAF 621B program, to avoid the possibility of "jamming" to which the original Timation was vulnerable. The first GPS satellite was launched in 1978 and the system was finally completed in 1995.


The GPS system uses a "constellation" of 24 satellites orbiting 12,000 miles high, each circling the globe every 12 hours. Satellite navigation (satnav) receivers use highly accurate timing signals from atomic clocks in four separate satellites to calculate their position. Since radio signals travel at the speed of light (approximately 300,000 km/s), the distance of the receiver from the satellite can be calculated by multiplying the transmission time of the signal from the satellite to the receiver by the speed of light. By triangulating the distances from four separate satellites, the exact position of the receiver can be calculated. Note that a 10 nanosecond (10⁻⁸ second) timing error results in a positional error of 3 metres (roughly 1 foot per nanosecond of error) due to the clock alone. Other potential error sources include satellite orbit errors, multipath interference errors, atmospheric conditions and electrical noise. Despite these errors, the standard GPS system is accurate to about 3 metres. Military navigation systems are an order of magnitude better than this.
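
The timing arithmetic is easily checked; a minimal sketch, using only the speed of light:

```python
C = 299_792_458.0                    # speed of light in m/s

def range_error(timing_error_s):
    """Pseudorange error caused by a given receiver clock error."""
    return C * timing_error_s

print(range_error(10e-9))            # 10 ns -> ~3 metres
print(range_error(1e-9))             # 1 ns  -> ~0.3 metres (about 1 foot)
```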

The GPS signal transmission frequency is 1575.42 MHz.


In 1983, after the commercial airliner Korean Air Lines Flight 007 strayed into prohibited Soviet airspace and was shot down by a Soviet interceptor, President Ronald Reagan announced that GPS would be made freely available for civilian use once completed, to avoid a repetition of such navigational tragedies.


See also Satellite Technology


1960s The construction of high energy Zinc air button cells was made possible by the use of very thin electrodes and Teflon insulation.


1960s Two 1.5 kW alkaline fuel cells, based on the Bacon patents, provided electrical power and much of the drinking water for NASA's Apollo spacecraft which went to the moon.


1961 (April 12) Soviet pilot and cosmonaut Yuri Alekseyevich Gagarin became the first human to journey into outer space, completing an orbit of the Earth and returning safely in a Vostok spacecraft designed by Sergei Korolev, who four years earlier had ignited the space race with the launch of the Sputnik satellite.


1961 The U.S. Atomic Energy Commission, whose functions later passed to the Department of Energy (DOE), began providing Radioisotope Thermoelectric Generators (RTGs) for NASA. RTGs are nuclear batteries which generate electrical power from the heat released by the decay of certain radioactive isotopes. They are used for space applications and to power unmanned remote installations such as lighthouses.


1961 American engineer Robert H. Riley, working at Black & Decker, invented the first cordless electric drill. Because of the limitations of the Nickel Cadmium batteries available at the time, Riley's first 4.8 Volt cordless drills could only produce 10 to 20 Watts, compared with the 200 to 250 plus Watts of the conventional mains powered drill. To compensate for the lack of power, more efficient motors were designed and gearing was used to increase the torque, but the designs were too expensive to be commercially successful, although some more powerful versions were used in the 1960s US Gemini and Apollo space programmes. It was not until 1985, with the advent of the Skil screwdriver, which was aimed at a less power hungry application, that the demand for cordless power tools finally took off.


1961 96% of British homes wired for electricity.


1961 Internet visionary Robert Taylor, then a project manager at NASA, used his position to direct seed funding to support the development of key enabling technologies which were essential to the building of the Internet. The initial funding for Graphical User Interfaces (GUI) went to computer scientist Douglas Engelbart, who used it, in part, to invent the computer mouse.


Five years later, then working at ARPA (now DARPA), Taylor kick-started the Internet when he convinced his boss to invest $500,000 of taxpayer money to build a network linking the computers of four major universities in the world's first computer network so that they could cooperate on research and share their information. That network was the ARPAnet, precursor to the Internet.


In 1970 Taylor moved to the recently formed Xerox PARC as associate manager of their Computer Science Lab where he stayed until 1983 being promoted to manager in 1977. Under his inspirational leadership the lab pioneered or perfected many of the innovations we associate with modern computing: the graphical user interface (GUI), icons, pop-up menus, cut-and-paste techniques, overlapping windows, bitmap displays, easy-to-use word processing programs, and Ethernet networking technologies, among others, key building blocks of the personal computer. When Steve Jobs famously visited Xerox PARC in 1979, he was astonished by these innovations and even more so by the fact that they had never been commercialised. He immediately set about incorporating these technologies into his next generation of personal computers, the Lisa and Macintosh. These ideas were subsequently built into the legendary Windows Operating System by Bill Gates.


1961 Rudolf Emil Kalman, a Hungarian working at Columbia University in the USA, developed the Kalman Filter, a mathematical technique which enables accurate information to be derived from inaccurate data and which is used in complex control systems with multiple inputs. Initially developed for predictive control systems, such as those used in spacecraft guidance when chasing moving targets, it also finds use for BMS State of Charge determination.
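
A minimal one-dimensional sketch of the idea in Python; the noise figures and the State of Charge scenario are illustrative assumptions, not from any real BMS:

```python
import random

def kalman_1d(measurements, process_var=1e-5, meas_var=0.04):
    """Estimate a slowly varying quantity from noisy readings."""
    x, p = 0.0, 1.0                      # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var                 # predict: uncertainty grows over time
        k = p / (p + meas_var)           # Kalman gain: how much to trust the reading
        x += k * (z - x)                 # update the estimate with the innovation
        p *= (1.0 - k)                   # shrink the uncertainty accordingly
        estimates.append(x)
    return estimates

noisy = [0.75 + random.gauss(0, 0.2) for _ in range(100)]   # "true" SoC = 75%
print(kalman_1d(noisy)[-1])              # converges near 0.75 despite the noise
```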


1961 The availability of integrated circuits on single monolithic chips paved the way for the digital revolution enabling the production of low cost logic circuits. Over the years, more and more logic functions were incorporated into single chip packages and a series of logic families was developed as improvements to the semiconductor technology were made. Some major developments are outlined here:

  • 1961 Resistor-transistor logic (RTL)
    Noyce's team at Fairchild introduced the first commercial integrated circuit, a set-reset flip-flop (a toy simulation of such a circuit follows this list), the first of a family of logic circuits built from bipolar junction transistors (BJTs) and resistors. RTL circuits are made up from transistors and resistors arrayed to carry out NOT-OR and NOT-AND functions. Although the technology had its limitations and was soon superseded by more efficient designs, it was the old, reliable RTL which was used in the Apollo Guidance Computer (AGC) which took the astronauts to the moon and back in 1969. RTL is now obsolete.
  • 1961 Multiple Emitter Input Transistor and Transistor-Transistor Logic (TTL)
    P.M. Thomson, working at Plessey labs in the UK, patented the multiple-emitter input transistor which he demonstrated in early TTL logic circuits made possible by the device. Unfortunately credit for the invention was later claimed by others who popularised its use, and Thomson's name has all but disappeared from semiconductor literature.
  • 1962 Diode-transistor logic (DTL)
    A breakaway team who had left Fairchild to form Signetics, subsequently led by Orville Baker, announced a second generation logic family which incorporated diodes into the input circuits of the transistor switches, increasing the circuit functionality as well as the logic gate fan-in (the number of inputs which can be connected to the gate) and improving the noise immunity.

    RTL and DTL logic circuits were direct implementations in Silicon of the equivalent circuits made with discrete components wired together on a circuit board, and the rapid impact of the technology was in part due to its familiarity to applications engineers.

  • 1962 Transistor-Transistor Logic (TTL or T²L)
    The basic principle of TTL logic was outlined by James L. Buie working at Pacific Semiconductors, but the credit for the invention was eventually taken by Thomas A. Longo, working at Sylvania, who turned the idea into practical devices which he showed at Wescon. They were bipolar devices with high speed, but suffered from poor noise immunity. The transistor output in a TTL device is connected directly (rather than through a resistor or diode) to a transistor input of the next stage.

    TTL logic was the first to use the potential of the integrated circuit to produce devices which were not possible at the time with discrete components. The switching action of a TTL gate is based on the multiple-emitter input transistor, invented by P.M. Thomson, which replaces the array of input diodes of the earlier DTL logic, giving improved speed and a reduction in chip area. TTL is fast but power hungry, creating heat dissipation problems in dense circuits and heavy demands for battery power. It made new circuit configurations possible however and dominated the microelectronics industry through the sixties and into the seventies, when it was largely displaced by CMOS logic in large-scale integration.

  • 1962 Emitter coupled logic (ECL)
    A bipolar logic design pioneered by Jean Aroot at Motorola. Also called Current Mode Logic (CML), it is the fastest logic family currently available. It operates the transistors in a non-saturating mode, unlike TTL where transistors are either cut off or saturated. ECL is thus faster than TTL but it consumes even more power.

    The ECL logic input is applied to one side of a differential amplifier which has a fixed bias on the other input. See diagram. Since the transistors are always in the active region, they can change state very rapidly, so ECL circuits can operate at very high speed, but it also means that the transistors draw a substantial amount of power in both states (one or zero), generating large amounts of waste heat. ECL logic also permits a large fan-out (the number of parallel external circuits which the logic gate can drive).

  • 1963 Complementary metal oxide semiconductor (CMOS)
    Frank Wanlass, working with C.T. Sah at Fairchild, realised that a complementary circuit of NMOS and PMOS would draw very little current and published the idea of CMOS logic. CMOS shrank standby power by six orders of magnitude over equivalent bipolar or PMOS logic gates and reduced battery power requirements accordingly. Early designs were slower than TTL and sensitive to damage from static discharge, but these problems have been overcome. As well as low power dissipation, CMOS also has a small physical geometry permitting very high component densities. CMOS made large scale integration possible and now forms the basis of the vast majority of all high density ICs manufactured today.

  • 1969 Bipolar CMOS (BiCMOS)
    The problems of integrating bipolar and MOS transistors into a single device were overcome by a team led by Hung Chang Lin, who published their design for "Complementary MOS-Bipolar Transistor Structure" at the IEEE - IEDM.
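
As promised above, a toy simulation of the set-reset flip-flop behaviour which Fairchild's first IC implemented with cross-coupled NOR gates (the NOT-OR function of RTL); the iterate-until-settled coding style is purely illustrative:

```python
def nor(a, b):
    """The NOT-OR function which RTL built from transistors and resistors."""
    return int(not (a or b))

def sr_latch(s, r, q=0, nq=1, steps=4):
    """Two cross-coupled NOR gates; iterate until the feedback settles."""
    for _ in range(steps):
        q, nq = nor(r, nq), nor(s, q)
    return q, nq

print(sr_latch(s=1, r=0))              # set   -> (1, 0)
print(sr_latch(s=0, r=1))              # reset -> (0, 1)
print(sr_latch(s=0, r=0, q=1, nq=0))   # hold  -> (1, 0): the bit is remembered
```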


1962 The first solid state op-amps were introduced by Burr Brown and G.A. Philbrick Researches, but it was designs by Bob Widlar, working in partnership with Dave Talbert at Fairchild, which caused the demand to take off. Widlar avoided using resistors and capacitors where possible, using diodes and transistors instead. In particular, dc-biased transistors were used in place of high value resistors. His first design, launched in 1963, contained just nine transistors and sold for $300. The op-amp is now the workhorse of linear circuits.


A brilliant and prolific designer, Widlar was a larger than life, nonconformist character with a prodigious capacity for the consumption of alcohol. He left Fairchild to join the fledgling National Semiconductor when his boss Charles E. Sporck refused to reflect his perceived worth to the company in his pay packet. A year later Sporck himself moved to National Semiconductor as president becoming Widlar's boss once more, but Widlar's compensation package was by then secure. When a budget crunch led Sporck to stop all expenditures, from buying new pencils to mowing National's lawn, Widlar brought in a sheep in the back seat of his Mercedes-Benz convertible to cut the grass. He retired at the age of 29.


1962 American Nick Holonyak, Bardeen's first graduate student, by then working at G.E. Labs, invented the first practical light-emitting diode (LED) using Gallium Arsenide Phosphide (GaAsP). He also invented the first semiconductor laser to operate in the visible spectrum. Long life light sources, LEDs are now used in displays, remote controls and laser devices.


The light emitting properties of semiconducting diodes were first discovered by Round in 1907 but were treated as a curiosity until they were rediscovered in 1922 by Losev, who began in-depth investigations of semiconductor diodes.


1962 Brian D. Josephson, a graduate student at Cambridge University, predicted that electrical current would flow between two superconducting materials even when they are separated by a non-superconductor or insulator. This phenomenon is today known as the "Josephson effect" and has been applied to electronic devices capable of detecting even the weakest magnetic fields.


1962 Rachel Carson published "Silent Spring" exposing the hazards to the environment of the pesticide DDT. Coming almost 100 years after the Alkali Works Act, it raised once more the awareness of the complacent, or more likely the uninformed, public about the dangers of the unrestricted use and disposal of toxic chemicals and the need to protect the environment. It replaced complacency with concern and led directly to the upsurge of the conservation and environmentalist movements.


This was both a threat and an opportunity for battery manufacturers. On one hand they extract and process vast amounts of chemicals which their customers eventually dispose of as waste, some of it toxic, and this needs to be regulated and made safer. Already the use of some chemicals, such as mercury and cadmium, has been banned or restricted. On the other hand battery power is being promoted for transport applications where it can reduce the overall consumption of fossil fuels and the emission of "greenhouse gases", first identified as a problem by Arrhenius in 1896. In some cases, if batteries cannot eliminate pollution, at least they can move it away from population centres or enclosed spaces to remote generating plants. Another benefit has been the establishment of recycling to make better use of the world's resources.


1962 American cybernetics and computer scientist Joseph Carl Robnett "Lick" Licklider, working at Bolt Beranek and Newman (BBN), proposed the concept of an "Intergalactic Computer Network", allowing multiple users to access and interchange information on a global computer network.

This was during the early days of digital computers when only universities and government establishments possessed the large, so called "mainframe computers", since they were the only organisations with the resources, knowledge and capability for building and exploiting this new technology, but such machines typically supported only a single user. Licklider envisaged the interconnection or networking of these computers by means of dedicated communications lines, providing access to multiple users on the network. This was more than ten years before the invention of the personal computer.

He followed up in 1963 with the first step towards his goal, a proposal for creating a time-sharing network which would allow multiple users to have simultaneous access to individual computers. The concept was already known but it had not yet been implemented in a practical system.


Licklider's concepts, and the principles on which they were based, covered almost everything that became the Internet and are now also known collectively as the "cloud". The advent of the personal computer enabled these resources to be made available to the general public over the telephone network, giving them direct access to an unimaginably large information resource and expanding the potential of the global computer network by many orders of magnitude. The invention of the smartphone made this network available to a second wave of users.

Cloud computing today involves a worldwide network of data centres accessible via the Internet giving remote computer installations as well as desktop P.C.s, laptops, tablets and mobile phones, access to storage and processing capacity configured to deliver a range of services. Access may be private or public, shared or exclusive, free or chargeable, and may include services such as security verification, data storage (including distributed databases), data gathering and retrieval, data sharing, resource sharing, cost sharing, billing systems, information resources, information networks, communications, broadcasting, hosting and running applications, systems monitoring, tracking systems and software maintenance.


Later in 1963 Licklider was appointed head of the Information Processing Techniques Office (IPTO) at the United States Department of Defense's Advanced Research Projects Agency (ARPA), where he became an evangelist for these new ideas.

(ARPA and DARPA are the same organisation, which has changed its name back and forth several times. The "D" stands for Defense.)

He did not stay long enough at ARPA to turn these concepts into reality, returning to MIT in 1964. He was however one of the most important pioneers in the history of computer science and it was his vision which provided the seed from which the Internet grew.


The Internet evolved out of Licklider's ARPANET, but it was left to others to provide the necessary innovation and technologies to make it happen.




After Licklider left ARPA he was replaced as head of the IPTO by Ivan Sutherland, inventor of Sketchpad, who carried forward Licklider's ideas for a global computer network.


It is noteworthy that nearly all of the above innovations came from the U.S.A.

Though it did not contribute directly to Internet technology, the FCC's "Carterfone" ruling in 1968 was an essential step in establishing the conditions which freed inventive minds to think up and implement new applications for the biggest machine in the world. Thomas Carter started a trend that brought tremendous benefits to consumers but whose consequences were catastrophic for some of the world's giant telecoms manufacturers.


See also the Communications Revolution


1962 Telstar 1, the world's first communications satellite, was launched. It enabled live television to be exchanged for the first time between ground stations in the USA (Andover, Maine), the UK (Goonhilly Downs) and France (Pleumeur-Bodou), for subsequent broadcasting from terrestrial transmitters. In addition it could simultaneously carry 60 two way telephone conversations, or alternatively 600 multiplexed one way telephone channels. It was conceived as an experimental system designed to prove the feasibility of commercial satellite communications.

The satellite was built by Bell Labs, the research arm of AT&T, at the time the world's largest company, as part of an international collaboration between AT&T, NASA, the British GPO and French PTT, led by John Robinson Pierce.

The project leader responsible for converting Pierce's dreams into reality was Eugene F. O'Neill, who assembled a team of more than 400 people at Bell Labs to work on Telstar. The project cost about $50 million.


Weighing in at only 170 lbs (77 kg) it was powered by Nickel-Cadmium batteries, recharged by 3600 solar cells producing 14 Watts. The signal transceiver and solar panels were designed by James M. Early and contained 1064 transistors. The power amplifier used a single vacuum tube, a travelling wave tube (TWT) amplifier with a power output of up to 4 Watts, designed by Rudolf Kompfner its original inventor. The receiver (uplink) frequency was 6390 MHz and the transmitter (downlink) frequency was 4170 MHz.

The satellite launching system available to AT&T at the time did not have the capability to launch the satellite into a geosynchronous orbit, nor was it possible at the time to maintain a satellite in such an orbit, so that Telstar's availability for transatlantic signal transmissions was limited to the 20 minutes in each 2.5 hour orbit when the satellite passed over the Atlantic Ocean.

Telstar was maintained in its desired attitude by imparting a spin to the satellite as it parted from its launch vehicle. This method, known as spin-stabilisation, utilised the gyroscopic forces associated with the spinning satellite to resist external perturbations in the satellite's attitude due to variations in the Earth's gravitational and magnetic fields encountered along its path. Telstar also incorporated a viscous ring mechanical damper which attenuated the coning motion, or spin precession, of the spinning satellite body which causes instability as the spin ultimately decays.


The ground station antennas in the USA and France were two specially constructed massive steerable horns, 94 feet (28.7 m) high and 177 feet (54 m) long, weighing 380 tons (340,000 kg). The UK antenna was an 85 foot (26 m) steerable parabolic dish.


Bell Labs envisaged that a constellation of 40 MEO (Medium Earth Orbit) satellites orbiting at 7,000 miles (11,265 km) altitude in polar orbits, and 15 in equatorial orbits, would provide service 99.9% of the time between any two points on Earth. This would need to be supported by about 25 ground stations to provide global coverage. Pierce estimated that the cost of such a system would be around $500 million and that the potential traffic provided by AT&T's massive communications network could justify the investment.

The advent of Syncom 3, placed in a geostationary orbit two years later, rendered these plans obsolete.


See more details about Telstar technology.


1962 The Communications Satellite Corporation COMSAT was created by the U.S. Communications Satellite Act of 1962 with the objective of developing commercial and international satellite communication technologies and systems with programmes funded and regulated by the government. Shareholders were major communications corporations and independent investors. Its first president was Canadian born Joseph Vincent Charyk, an early champion of the geosynchronous communications satellite industry. The thirteen man board appointed with the responsibility of bringing about this technological revolution was composed of 5 lawyers, 3 company presidents (from a container shipping line, a pharmaceutical company and an industrial chemicals company), 2 financiers, 1 publisher, 1 trade union chief and 1 engineer!

With the objective of creating and operating satellite services with global coverage, in August 1964 COMSAT helped create the intergovernmental consortium INTELSAT (International Telecommunications Satellite) with COMSAT as its majority shareholder and eleven participating countries holding the balance of the shares. INTELSAT's first satellite was the Early Bird (INTELSAT I) launched by COMSAT in 1965. By 2001, INTELSAT had over 100 members; that year it was privatised, changing its name to Intelsat. Today the number of participating countries has risen to over 140 and Intelsat operates a fleet of over 50 communications satellites providing service to over 600 Earth stations.


See how power and politics affected COMSAT and INTELSAT III


1963 Syncom 2, the World's first experimental geosynchronous communications satellite, was launched into orbit in July 1963. Designed by American electrical engineer Harold Rosen and his team, Thomas Hudspeth and Donald D. Williams, at Hughes Aircraft Company, it was the first satellite which could be steered and controlled from the ground. Intended as the first step in the development of a geostationary communications satellite as envisioned by the science fiction author Arthur C. Clarke, Syncom 2 carried a single two way telephone voice channel and 16 teletype circuits and successfully kept station at the geosynchronous altitude calculated by Herman Potocnik Noordung in the 1920s.

The first satellite telephone call between heads of government was made via Syncom 2 by President John F. Kennedy in Washington D.C. and Nigerian Prime Minister Abubakar Balewa aboard the USNS Kingsport docked in Lagos Harbour.

Syncom 1, launched earlier in the year, had failed to get into its planned orbit due to the rupture of its Nitrogen tank, and communications with it were lost.

Syncom 3, launched the following year carrying a broadcast quality television channel in response to the challenge from Telstar, was the first satellite to be launched into a geostationary orbit.

The success of the Syncom experiments resulted in the adoption of the geostationary orbit as the preferred orbit for many communications satellites and led to the development of commercial satellites for Comsat and Intelsat and the widespread deployment of satellites for international and transoceanic telephone, television and data transmissions.


Rosen pioneered the overall concept and design of the Syncom series of satellites. His team were the first to provide solutions for the tricky problems of manoeuvring a satellite out of its launch orbit and placing it into a geostationary orbit and keeping it on station in a stable orientation with its antenna beams pointed towards the ground and a minimum number of solar cells pointed towards the Sun as the satellite rotated. This they did by using a basic passive spin stabilisation system and adding sensors and gas jets to the spinning satellite body to create an active control system which could be used to control the satellite's speed, position and attitude from the ground.

Pierce at Bell Labs had used spin-stabilisation on Telstar purely for maintaining stability. Rosen used the spinning satellite coupled with gas thrusters to manoeuvre the satellite to change its orbit as well as to keep it stable.


Starting the project in 1958, Rosen devised a design which enabled a satellite to be steered from the ground and which also solved the problems of station keeping caused by drift due to perturbations in gravitation and other forces affecting satellites' motion, which made it difficult to maintain them in stable geostationary orbits. Although geostationary satellites appear to be stationary to an observer on the Earth, they are actually hurtling through space at 10,090 feet per second (3,075 metres per second).

Years earlier he had attended lectures on the dynamics of rotating bodies at Caltech given by Nobel laureate Carl D. Anderson, discoverer of the positron. Noting that spinning objects, such as tops and gyroscopes, are many times more resistant than inert objects to movement caused by the application of external forces, and that rotational forces could exert a powerful stabilising effect on projectiles such as artillery shells, bullets and footballs, he realised that this technique of spin-stabilisation could be applied to satellites to reduce their tendency to deviate from a prescribed path.

His major insight however was that by orienting the spin axis of the satellite parallel to the Earth's axis of rotation, and incorporating into the satellite body a single radial (lateral) thruster which could be pulsed with short bursts of gas synchronised with the angular position of the rotating satellite drum, he could provide two dimensional lateral positioning as well as velocity control along the satellite's orbital path, thus enabling both manoeuvring and station keeping of the satellite by active control of its period and orbital eccentricity. (Earlier designs considered by Williams, a gun enthusiast, envisaged ejecting bullets from the satellite body to provide the thrust.)
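
For intuition, a small numerical illustration in Python, with purely illustrative numbers: pulsing a body-fixed thruster only within a narrow window of the spin phase accumulates delta-v in one inertial direction, while firing continuously around the whole revolution cancels out.

```python
import math

def net_delta_v(window_deg, steps=3600):
    """Sum unit thrust impulses over one revolution of the spinning drum."""
    dvx = dvy = 0.0
    for k in range(steps):
        phase = 2 * math.pi * k / steps
        if math.degrees(phase) <= window_deg:    # fire only within this phase window
            dvx += math.cos(phase) / steps
            dvy += math.sin(phase) / steps
    return math.hypot(dvx, dvy)                  # magnitude of the net inertial delta-v

print(net_delta_v(30))     # short synchronised pulses: useful net delta-v
print(net_delta_v(360))    # continuous firing: the impulses cancel, ~0
```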

The problem of nutation (wobbling) of the spinning satellite about its spin axis could be solved by hydraulic damping, but Rosen still needed a solution to attitude control, that is the control of the tilt of the satellite's spin axis. This was needed to change the spin axis from the direction parallel to its orbital plane around the Earth, imparted by the launch vehicle, to the direction parallel to the Earth's spin axis for station keeping. Attitude control was also needed for keeping the antenna patterns which revolved with the satellite always pointing towards the Earth as the satellite spun.


Williams provided the solution. A brilliant, Harvard-educated, 27-year-old mathematician and inventor who had previously worked at Hughes, Williams was persuaded by Rosen to join the team in 1959. He was interested in astronomy and the possibility of using satellites for navigation and had been investigating the possibilities of geosynchronous satellites. Rosen had conceived a design for attitude control using four more thrusters which he showed to Williams.

Williams however pointed out that this facility could be accomplished by using only a single axial thruster, similar in concept to Rosen's radial thruster, and he picked up and ran with the idea. He designed, and built in his garage, an ingenious V Beam solar sensor which could determine both the spin phase of the satellite and the angle between its spin axis and the Sun line (the direction of the Sun), an essential attitude control reference needed to make the system work. He also carried out all the mathematical calculations to convert the ideas into practical systems, as well as the calculations for the associated orbital manoeuvres needed to achieve the initial orbit and orientation.

A detailed description and diagrams of the Syncom Satellites and their Station Keeping and Control can be found on the page about Satellites.


Hudspeth, the third member of the team, designed an extremely lightweight antenna and the satellite's electronics, while a special lightweight TWT amplifier was designed for the project by engineer John Mendel, working in Hughes' Radar Lab.


Funding - Sceptics and Saviours

By April 1960 the team had successfully demonstrated a working laboratory prototype of their invention, known as the "Dynamic Wheel", but Hughes senior management balked at funding the next stage, which was to deploy a fully working system, even after Rosen, Williams and Hudspeth each offered to invest $10,000 of their own cash in the project. While $10,000 was a considerable sum in 1960, it was dwarfed by the massive cost of putting a satellite into orbit. (Bell Labs spent $50 million to get Telstar off the ground.)

The reasons for Hughes' caution were many. Because the geosynchronous orbit was much higher than the orbit used for Telstar, which had been funded by AT&T, Syncom needed a more powerful launch vehicle, more powerful transmitters, more sensitive receivers and larger antennas. Furthermore, complicated orbital manoeuvres were needed to place it into position, requiring very precise control of the satellite's attitude and speed, none of which had been attempted before and which presented a significant risk. Concern was also expressed about whether the signal propagation delay of 240 to 280 milliseconds would hamper the satellite's control systems and whether it would impede normal two way telephone conversations, which would be subject to round trip delays of over 0.5 seconds. Then there was the obvious question as to whether this experiment would lead to a profitable business for the company and, last but not least, Hughes did not have the financial resources of the mighty AT&T, nor did they have control of, nor even access to, any in house communications network traffic from which a telecommunications operating company like AT&T benefitted.

Rosen made the rounds of government offices including NASA and the Department of Defense (DOD), universities and competing electronics companies including GTE, Raytheon and Bell Labs to find encouragement and a financial partner. After Raytheon Corp. offered Rosen and his team jobs and the chance to develop Syncom there, Hughes executives relented and committed their support to the project. Syncom was demonstrated at the 1961 Paris Air Show but no sales resulted. Eventually John Rubel, a former Hughes executive working for the DOD, managed to persuade NASA and the DOD to set up a joint NASA - DOD programme to develop Syncom, and a $4 million contract was placed with Hughes in August 1961 for the construction of three satellites based on the Syncom prototype, with the DOD providing the associated ground stations.


Patents - Triumph and Tragedy

A patent for an "Apparatus for Changing the Orientation and Velocity of a Spinning Body Traversing a Path" was filed jointly by Rosen and Williams, in 1959 (granted in 1968). However the following year, two weeks after the successful 1960 laboratory demonstration, a second patent application was filed by Williams alone. This was eventually abandoned but filed once more by Williams in 1964 as a continuation-in-part of the earlier application and a patent for "Velocity Control and Orientation of a Spin-stabilized Body" was ultimately granted to Williams in 1973.

In September 1961 Rosen also filed a patent for an "Apparatus providing a rotating directive antenna field pattern associated with a spinning body" which was granted in May 1964. This referred to the so called "de-spun" antenna which was first used by Intelsat 3 and became the standard for spin-stabilised satellites.


Meanwhile Syncom was a great success and at the age of 34 Williams was named one of America's ten outstanding young men of 1965 by the United States Junior Chamber of Commerce but he was a troubled man. The following year he visited Rosen with something on his mind. He apologised for not including Rosen's name on the patent for the satellite control system but Rosen reassured him that no apology was necessary. Returning home later that day Williams put a gun to his head and killed himself while standing in the bathtub. He had been undergoing psychiatric treatment, but had failed to keep a recent appointment.


Epilogue

The "Williams Patent" led to one of the longest patent lawsuits in U.S. history which Hughes filed against the U.S. government in 1973. Hughes claimed the government had used (stolen) the company's patented technology on a number of space programs, including the Department of Defence’s Global Positioning System (GPS) and NASA’s Galileo probe to Jupiter. The government countered by arguing that Hughes had exaggerated the importance of the technology and should not have been granted a patent in the first place. In 1999, a federal judge agreed with Hughes, and ordered the government to pay $154 million for patent infringement. Hughes made many millions more in royalties from commercial exploitation of their technology. Not bad for a project which they had reluctantly supported.

AT&T's Telstar is remembered as the satellite which provided the first live television pictures over the Atlantic, but communications were limited to 20 minutes during each two and a half hour orbit and large steerable antennas were needed to track the satellite. Syncom coming two years later was a much more sophisticated, yet practical, system which didn't have Telstar's deficiencies and became the model on which a generation of communications satellites were based.

Rosen eventually had his name on more than 50 patents, including the basic patent for spin-stabilised satellites and the patent for de-spun antennas, and he became known as the "father of the geostationary satellite" while Pierce was known as the "father of the communications satellite".


1963 South African-born American physicist Allan McLeod Cormack published the first of two papers (the second in 1964) outlining the theoretical foundations of computerised axial tomography (CAT) scanning, or CT scanning, for making detailed X-ray images of cross-sections of the head, but his papers generated little interest at the time. Unaware of Cormack's work, British electrical engineer Godfrey Newbold Hounsfield, a radar specialist in the Royal Air Force during World War II, built the first CAT scan machine in 1972. The scanner sends hundreds of X-ray beams at different angles through the brain or body and uses a computer to construct detailed cross-section images from the received data; a three dimensional analysis of the body's organs can be made from a series of these X-ray cross-sections. For their independent efforts, Cormack and Hounsfield shared the Nobel Prize for Physiology or Medicine in 1979.
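
The reconstruction step can be illustrated with a deliberately simplified sketch: project a two dimensional "body" at many angles, as the scanner's beams do, then smear the projections back. Real CT uses filtered backprojection; this unfiltered version, with an invented phantom, only illustrates the principle.

```python
import numpy as np
from scipy.ndimage import rotate

phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0                          # a simple rectangular "organ"

angles = np.linspace(0, 180, 60, endpoint=False)
sinogram = [rotate(phantom, a, reshape=False, order=1).sum(axis=0)
            for a in angles]                         # one 1-D projection per angle

recon = np.zeros_like(phantom)
for a, proj in zip(angles, sinogram):
    smear = np.tile(proj, (64, 1))                   # spread each projection back
    recon += rotate(smear, -a, reshape=False, order=1)

print(np.unravel_index(recon.argmax(), recon.shape)) # peak lies inside the "organ"
```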


1963 John Gunn, working for IBM, invented the Gunn diode. It is a negative resistance device used to make cheap microwave oscillators and was one of the first important applications for the semiconducting material Gallium Arsenide.

Remember Mr Gunn if you are caught speeding in a Radar trap.


1964 Working independently on entirely different problems and unaware of each other's work, Paul Baran, a Polish immigrant at the RAND Corporation in the USA, and Welsh born mathematician Donald Watts Davies, at the UK National Physical Laboratory (NPL), each came up with the idea of packet switching, which coincidentally solved both of their problems.


Baran published first in 1964. It was during the Cold War and the US was afraid that their telecommunications network was vulnerable to attack during a nuclear war. If one or two major cities were hit, it could paralyse a major part of the network. Baran's job was to provide a resilient and secure network which could keep on working even if a major part of it was damaged.


Davies' task was to make more efficient use of the existing Public Switched Telephone Network (PSTN), particularly for data communications between computers through time sharing techniques. Traditional circuit switching techniques used for the PSTN made inefficient use of network resources. They allocate a fixed circuit between the subscribers for the duration of the call even though the channel is most likely unoccupied for most of the time. In voice communications there are huge gaps in the speech, and hence occupation of the channel, because one party is listening while the other is transmitting and even during speech transmission there are short gaps as the caller hesitates or takes a breath. Data communications on the other hand have different characteristics tending to send or receive data in short bursts followed by periods of inactivity. Davies published his alternative networking scheme in 1966 and his assistant Roger Scantlebury shared this information with Larry Roberts and his ARPAnet team the following year.


Packet switching was the solution that both Baran and Davies arrived at. It works as follows (a toy illustration in code follows the lists below):

  • All messages are digital.
  • The sender's data are chopped into packets, each with its source and destination address, and the packets are launched onto the network.
  • The packets find their own way across the network, depending on circuit availability and congestion, directed by routers which read the addresses and send the packets on via the best available route. Individual packets from the same message may even take different routes across the network.
  • Messages are reassembled at the destination and any missing packets are re-sent.

It has the following advantages:

  • Messages can be re-routed around damaged, congested or inactive switching centres.
  • Data streams can be merged so that several subscribers can make simultaneous use of the channel enabling more efficient use of the network.
  • It is error free although it may suffer delays.
  • Communications are scrambled, making casual eavesdropping difficult, and impossible if the packets are transmitted via different routes.
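
As promised above, a toy illustration of the principle in Python; the field names and the simulated "network" are illustrative, not any real protocol:

```python
import random

def send(message, src, dst, size=8):
    """Chop the sender's data into addressed, sequence-numbered packets."""
    packets = [
        {"src": src, "dst": dst, "seq": i, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]
    random.shuffle(packets)              # packets may take different routes
    return packets

def receive(packets):
    """Reassemble the message at the destination, whatever the arrival order."""
    packets.sort(key=lambda p: p["seq"])
    return "".join(p["data"] for p in packets)

pkts = send("Packet switching enabled the Internet.", "NPL", "RAND")
print(receive(pkts))                     # the original message, intact
```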

System dimensioning for packet switching networks, as used in the ARPANET, was based on the theory of message queuing in communications networks outlined in Leonard Kleinrock's 1962 doctoral thesis at UCLA and published in book form in 1964. Kleinrock did not, as is often claimed, invent the key innovation of packet switching, namely the breaking up of the user's message into segments and sending the segments through the network separately.


The first public demonstration of packet switching was made by the NPL in 1968.

Packet switching was the fundamental communications technology which enabled distributed computer networking and thus the building of the Internet.

It was not until 1973 however that work started on standardising packet switching communications protocols to permit any network to connect to any other. It was TCP/IP that enabled this and thus the spectacular growth of the Internet.


See also Key Internet technologies.


1964 Deaf physicist Robert Haig Weitbrecht, frustrated at not being able to communicate by telephone, joined by fellow deaf colleagues James Carlyle Marsters, a California orthodontist, and electrical engineer Andrew Saks, grandson of the founder of the Saks Fifth Avenue department store, invented a teletypewriter system which could send and receive text over a standard telephone line, making telecommunications available to the deaf. They called it the TTY. It was not only a major breakthrough for deaf people, it also provided the enabling technology for interconnecting fax machines and for connecting the first generation of personal computers to each other through the Public Switched Telephone Network (PSTN).


Technical solutions to the problem of sending and receiving text had existed since 1903 in the form of teleprinter or Teletype machines, which incorporated a typewriter keyboard with a separate printer and an electrical interface connecting them both to a communications link. A message typed into one machine appeared as text on another similar machine at the other end of the link. Unfortunately Teletype machines required a dedicated communications line with its own exchange equipment, since they used a different communications protocol from the standard PSTN connection in most homes and businesses, which made the system prohibitively expensive.


Key to the TTY invention was the acoustic coupler modem which provided the device interface to the PSTN. The acoustic coupler housed an external modem device which converted digital signals, in this case the Teletype's Baudot coded signals, into audio tones which could be sent across the telephone network (modulation). A similar device at the receiving end converted the audio tones back to digital signals (demodulation). It used a microphone and a small loudspeaker which were built into a cradle designed to fit around a standard telephone handset. Rubber cups sealed and isolated the acoustic paths between the speaker in the coupler and the microphone in the handset, and between the earpiece of the handset and the microphone in the coupler. This arrangement allowed the audio tones used by the modems to be passed in both directions over the telephone network. Modem speeds were typically 300 bits per second, representing about 30 characters per second, which is a lot faster than a person can type or read, but painfully slow by modern standards.
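
The modulation half of the scheme can be sketched as follows; the tone frequencies and sample rate are illustrative assumptions rather than Weitbrecht's actual design values, and only the 300 bits per second figure comes from the text above:

```python
import numpy as np

FS = 8000                                   # audio sample rate, Hz (assumed)
BIT_RATE = 300                              # bits per second, as in the text
MARK, SPACE = 1270, 1070                    # assumed tone frequencies for 1 and 0

def modulate(bits):
    """Frequency-shift keying: each bit becomes a short burst of one tone."""
    samples_per_bit = FS // BIT_RATE
    t = np.arange(samples_per_bit) / FS
    tones = [np.sin(2 * np.pi * (MARK if b else SPACE) * t) for b in bits]
    return np.concatenate(tones)            # audio to play into the handset

audio = modulate([1, 0, 1, 1, 0])
print(len(audio) / FS, "seconds of audio")  # 5 bits at 300 bps, ~0.017 s
```

The receiving coupler performs the inverse operation, measuring which of the two tones is present in each bit period to recover the digital signal.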


Acoustic couplers were not the ideal design for modems. Rather than an acoustic coupling, a solid electrical connection between the Teletype and the PSTN would have been preferable, being much more reliable and less costly. But network operators claimed that any third party devices hard wired (electrically connected) to their networks could cause irreparable damage to the network through hazardous voltages, excess signal power, line imbalancing, erroneous network control and other faults. They were therefore protected by a U.S. Federal Communications Commission (FCC) regulation which specified that "No equipment, apparatus, circuit or device not furnished by the telephone company shall be attached to or connected with the facilities furnished by the telephone company, whether physically, by induction or otherwise." Underlying this reason however was the network operators' unspoken fear that allowing others to design and sell equipment which could be connected to their networks would threaten their monopolistic control of the lucrative market for end user equipment (known as customer premises equipment (CPE)). For these reasons the TTY was prohibited from connecting directly to the network and was obliged to use a coupler which did not utilise an electrical connection to the network.


It is ironic that the company founded by Alexander Graham Bell, a major part of whose life work was dedicated to helping the deaf, fought desperately, and ultimately unsuccessfully, to frustrate the connection of the TTY to their network.


See also the 1968 Carterfone ruling which solved the problem.


1964 The BASIC computer language (Beginners All-purpose Symbolic Instruction Code) was designed by American computer scientists, Hungarian born John G. Kemeny and Thomas E. Kurtz, at Dartmouth College, New Hampshire. Intended as a way of making computer programming suitable for non-scientists, it had a simpler though more limited instruction set than Fortran, it was easy to learn and it could run in 4 Kbytes of memory, which was all some of the earliest personal computers possessed. It quickly made minicomputers accessible to a much wider user base. It was ready when the first microcomputer was introduced in 1975 and enabled the development of a myriad of new applications, encouraging the adoption and spread of the personal computer.


1964 Patent for the first super flywheel battery was issued in Russia to Dr. N.V. Geulia. (See Flywheels)


1964 The printed circuit motor, also called the pancake motor, was patented by French inventor J. Henri-Baudot. It is a fast acting Ironless core motor useful for servo applications and industrial controls.


1964 Americans Donald L. Bitzer, Gene Slottow and Robert Wilson, working at the University of Illinois and searching for a flicker free screen which did not need constant refreshing, invented the flat plasma display panel. It consisted of three layers of glass. The centre layer had rows of tiny holes with a mixture of gases in them. Each outer sheet had thin, transparent metallic lines on its outer surface. The lines of each sheet were at right angles to the lines of the other sheet, and a gas-filled hole lay at each point where the lines crossed. The two grids carried a high-frequency electrical voltage sufficient to maintain a glow in the tiny gas cells but not sufficient to start the glow. An electrical signal applied to any pair of lines "turned on" the glow of the cell where the lines intersected, and the sustained voltage maintained the glow until another electrical signal turned it off. Exciting the gases trapped in the holes sandwiched between the glass plates created the screen images.


Ironically Bitzer was presented with the 1973 Vladimir K. Zworykin Award from the National Academy of Engineering, an award named after the inventor of the iconoscope, the technology which the plasma screen replaces.


1965 Concept of Fuzzy Logic controls proposed by Lotfi A. Zadeh at the University of California at Berkeley. A method of deriving precise information from vague data, it is also used for BMS State of Charge determination.
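
A minimal sketch of the idea in Python, assuming an invented voltage-to-State-of-Charge rule purely for illustration: a crisp measurement is mapped to graded membership of vague sets ("low", "high"), and a crisp output is recovered by weighting.

```python
def ramp(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi, clipped to [0, 1]."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def estimate_soc(cell_voltage):
    high = ramp(cell_voltage, 3.5, 4.2)       # degree to which the voltage is "high"
    low = 1.0 - high                          # degree to which it is "low"
    # Weighted (centroid-like) defuzzification between 10% and 95% SoC:
    return (low * 10.0 + high * 95.0) / (low + high)

print(estimate_soc(3.9))                      # a "fairly high" voltage -> ~59% SoC
```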


1965 Jim Russell, working at Battelle Memorial Institute, devised an optical disk data storage system using digital encoding for recording and playing back music. The system was patented in 1970 but Battelle could see no future in it and sold the patents for US$1 million to a company called Optical Recording Corporation (ORC), started by venture capitalist Eli Jacobs in 1971. Jacobs hired Russell and a number of his colleagues and provided the necessary funding for them to demonstrate a working prototype to Philips and Sony in 1974, but the consumer electronics giants too seemed unimpressed. Three years later, Philips and Sony joined forces to develop the compact disk (CD) which they launched on the market in 1982. By 1985 Russell had earned 26 patents for CD-ROM technology. Nevertheless Philips maintains to this day that their Klaas Compaan and Piet Kramer invented the CD, despite having settled ORC's claim for patent infringement in 1988 for US$30 million.


1965 The Fast Fourier Transform (FFT), originally devised by Gauss in 1805, was rediscovered by J. W. Cooley of IBM and John W. Tukey of Princeton, who published a paper describing how to perform it conveniently on a computer. The FFT samples a time varying analogue signal and converts it into its frequency components, which can be represented digitally for subsequent analysis or processing. It resulted in an explosion of practical applications which could now be performed by digital signal processors (DSPs) and which had previously been impractical to implement with analogue hardware.
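
A small example of what the algorithm delivers, using NumPy's FFT routines; the composite signal and sampling rate are illustrative:

```python
import numpy as np

fs = 1000                                        # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)                      # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)                   # O(N log N) transform
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]] # the two strongest components
print(sorted(peaks))                             # -> [50.0, 120.0]
```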


1965 The world's first commercial Communications Satellite, Intelsat I, known as the Early Bird, was launched by a Delta D rocket and steered into geostationary orbit above the Atlantic Ocean. It was the first operational satellite to provide a scheduled transoceanic TV service, carrying continuous, live television as well as telephone signals between Europe and North America on a commercial basis.

On station in orbit 22,300 miles (35,900 km) above the equator, Early Bird provided effective, stable, line of sight communications across the Atlantic Ocean.

Designed and built by Harold Rosen and his team at Hughes Aircraft Company for COMSAT, a public company created for the purpose of exploiting the technology, it was based on Syncom 3, the experimental spin-stabilised satellite they had successfully demonstrated the previous year. It weighed only 85 lbs (39 kg) and could carry 240 telephone voice channels or one television channel, but not simultaneously. In order to transmit one television channel, all the telephone voice channels had to be shut down. Primary power was provided by solar cells delivering 45 Watts.

Intended as a test of commercial viability, Intelsat 1 successfully showed that, with large parabolic ground station antennas with diameters of over 85 feet (26 metres) feeding low noise, cryogenic cooled amplifiers, reliable communications could be maintained despite a path loss of over 200 dB. Similarly, the end to end signal delay of 250 milliseconds was found to be acceptable.
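
The quoted path loss is consistent with the free-space path loss formula, 20·log10(4πdf/c). A quick Python check (assuming the 4 GHz band used by communications satellites of that era; the exact figures are illustrative):

    import math

    def fspl_db(distance_m, freq_hz):
        """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3.0e8)

    # Geostationary altitude, 4 GHz downlink:
    print(round(fspl_db(36.0e6, 4.0e9)))   # ~196 dB, rising past 200 dB
                                           # at higher bands and slant ranges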

Intelsat I became the model for many of the following generation of satellite communications networks.


Originally scheduled to operate for 18 months, Intelsat I was in active service for four years, being deactivated in January 1969. It was however briefly re-activated in June of that year to serve the Apollo 11 moon mission when the Atlantic Intelsat III B satellite failed.


See images and more details about satellite and Intelsat technologies

See also predictions by Arthur C. Clarke in 1945.


1965 The Russian Molniya Satellite, designed by Korolev and his team at OKB-1, was launched from the Baikonur Cosmodrome in Kazakhstan (then part of the Soviet Union) into a Highly Elliptical Orbit (HEO), now called the Molniya Orbit after the satellite. It was the Soviet Union's first communications satellite and the world's first satellite to provide nationwide TV coverage through a network of ground stations.


The idea of using a highly elliptical orbit as a less expensive alternative to the geostationary orbit to enable satellites to carry a greater payload and at the same time to provide excellent coverage of the northern hemisphere was first proposed by aeronautics engineer Bill Hilton to the British Interplanetary Society in 1959. The opportunity was not taken up by the British government who did not have a communications satellite programme at the time. The idea was however picked up by Korolev who started the development of the Molniya satellite at OKB-1 in 1960. Originally intended as an experimental spacecraft to test the utility of such a satellite for command and control of the armed forces it was also adapted to carry TV and telephony circuits. Production was by NPO PM (Scientific Production Association for Applied Mechanics) in Krasnoyarsk.


See details of the Molniya Satellite and the benefits of Highly Elliptical Orbits.


1965 Gordon Moore, co-founder of Intel, made the empirical observation that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented and predicted that this trend would continue for the foreseeable future. This was dubbed by the press as Moore's Law. In subsequent years the pace slowed slightly and in 1975 Moore revised his prediction to doubling approximately every two years (often quoted as every 18 months), a prediction which still holds good. Great for consumers, but the consequence for anyone operating in a business making or using integrated circuits is that they need to be both highly innovative and extremely fast moving.
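
The arithmetic of the law is simple compounding. A Python sketch, taking as an illustrative baseline the 2,300-transistor Intel 4004 of 1971 (described below) and a two-year doubling period:

    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        """Project a transistor count that doubles every doubling_years."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1981, 1991, 2001):
        print(year, round(transistors(year)))
    # 1971: 2,300   1981: ~74,000   1991: ~2.4 million   2001: ~75 million -
    # the right order of magnitude for each decade's leading processors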

There must ultimately be a limit to this miniaturisation however as, with nanotechnology, circuit elements are beginning to approach the size of molecules and atoms.

See Intel's Itanium 2 processor.


1965 The first working IMPATT diode, a powerful solid state microwave generator, was made by R.L. Johnston, Bernard C. De Loach and B.G. Cohen working at Bell Labs, based on ideas originally proposed by Shockley in 1954.


1965 Theodor Holm (Ted) Nelson, American information technology pioneer, coined the term hypertext. Inspired by Vannevar Bush's Memex, he described the concept in his iconoclastic and idiosyncratic book "Computer Lib / Dream Machines", published in 1974, as facilitating non-sequential writing in which the reader could choose his or her own path through an electronic document by means of hyperlinks. It was one of the ideas arising from his Project Xanadu started in 1960 which had the goal of bringing access to computers to the masses, amongst other things, by simplifying the user interface. Nelson did not develop the concept of hypertext into a practical working system and by 1998 Project Xanadu had still not achieved its lofty goals, but his book, which summarised ideas that were revolutionary at the time, became a computer classic.


In 2000 British Telecom (BT), checking through the thousands of patents it inherited from its predecessor the General Post Office (GPO), discovered a patent covering the principle of hyperlinking, applied for in 1976 by Desmond Sargent, an engineer at its Martlesham research labs. The patent, which covered a method of addressing blocks of information stored in a computer and retrieving the information remotely over the telephone network, was granted in 1989 by the US Patent and Trademark Office. The state owned GPO awarded Sargent £1 for his efforts, but failed to commercialise his ideas. BT's belated attempt to derive royalties from the idea was unsuccessful.


In 1968 Douglas Engelbart's team demonstrated a desktop application of hypertext but it was not until 1989 that the first practical working global hypertext system was introduced by Tim Berners-Lee and this was the spark that ignited the explosive growth of the World Wide Web.


See also Key Internet technologies


1966 Tom Longo, by now working at Transitron, developed a 16 bit Static Random Access Memory - SRAM chip using TTL technology, the first multi-cell dedicated memory chip. SRAM memory uses the well known flip flop circuit to store the bits and although the flip flop itself can be implemented with only two transistors, each memory cell actually uses four more transistors to control the read and write cycles, giving a total of six transistors per cell. Up to that time, with the available manufacturing technology, it had only been possible to incorporate a few transistors into an integrated circuit and a typical IC with six transistors could only store one bit. It was another four years before semiconductor memories became a serious competitor to the slow, bulky and power hungry ferrite core memory.


1966 The single transistor Dynamic Random Access Memory - DRAM memory cell was invented by IBM researcher Robert Dennard as a simpler alternative to the multi-transistor SRAM memory cell used by Longo (above). DRAM uses the presence or absence of a stored charge on a capacitor to represent a bit and a single FET to control the process. With such a low component count it is simpler and less expensive to make and occupies less space on the semiconductor chip, allowing a higher packing density. The charge on the capacitor however tends to drain away, so it must be constantly refreshed, adding complexity to the external circuitry. As with SRAM, it took four years to scale the design up to achieve practical high density multi-cell memory chips.


1966 Engineers Neil Weber and Joseph T. Kummer, working at the Ford Motor Company Scientific Laboratory in Michigan, demonstrated the Sodium/Sulphur battery system for EV applications, expected to achieve fifteen times the energy density of Lead Acid storage batteries.

They went on to develop direct energy conversion devices based on Sodium in 1968.


1966 The Committee on Data for Science and Technology CODATA was established in France as an interdisciplinary Scientific Committee of the International Council for Science ICSU, to determine internationally accepted values for all fundamental physical constants. See the results at CODATA 1998.


1966 Bringing up to date Tyndall's 1854 experiment of passing a light beam along a curved stream of water, Chinese born engineer Charles K. Kao and English engineer George Hockham, working at STC Labs in England, demonstrated in the laboratory that light passing down a fibre optic strand can be used to carry data over short distances. Building on Kapany's experiments with glass rods, they called their fibre an optical waveguide and predicted that fibre optic communications would be possible if low loss (less than 20 dB/km) optical fibre could be developed. This means that at least 1% of the light entering a 1 kilometre long fibre should emerge at the other end. At the time the best available optical fibres exhibited losses of 1,000 dB/km or more. It was not until 1970 that a team at Corning Glass Works was able to produce practical optical fibres.
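
The 20 dB/km criterion and the 1% figure are two ways of saying the same thing, since every 10 dB of loss cuts the light by a factor of ten. A quick Python check:

    def fraction_emerging(loss_db_per_km, length_km):
        """Fraction of the input light emerging from the far end of a fibre."""
        return 10 ** (-loss_db_per_km * length_km / 10)

    print(fraction_emerging(20, 1))     # 0.01 - Kao and Hockham's 1% target
    print(fraction_emerging(1000, 1))   # 1e-100 - why 1966-era glass was hopeless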


1966 President Charles de Gaulle opened the world's first, and still the most powerful, tidal power station on the Rance estuary in Brittany. The Rance plant has 24 reversible 10,000 kilowatt power units which permit the tidal flow to work in both directions, from the sea to the tidal basin on the flood and from the basin to the sea on the ebb. About seven-eighths of the power is produced on the more controllable ebb flow.


It is sad that the only name which remains indelibly associated with this engineering marvel is that of a politician rather than the names of the engineers who conceived and built it.


Although Rance was the first tidal power station, tidal mills were known to be in use from Roman times until the time of the Industrial Revolution.


1967 Development work on the recombinant, sealed Lead-acid (SLA) battery based on AGM technology was begun by subsidiaries of The Gates Corp, 10 years after the Jache patent.


1967 The Nickel-Metal Hydride battery was patented by Klaus D. Beccu working at the Battelle Geneva Research Center. Though it was used by Daimler-Benz and Volkswagen it was not until the 1980s that it achieved large scale commercial volumes after Stanford Ovshinsky introduced some improvements to the electrodes.


1967 Dawon Kahng and Simon M. Sze of Bell Labs proposed the first non-volatile memory device, which holds its programmed values indefinitely or until deliberately erased. The ancestor of all programmable logic and memory, it used floating gate devices to store the information.


1967 In search of ever finer line-widths, IBM engineers R.F.M. Thornley and Michael Hatzakis published "Electron optical fabrication of solid state devices" in which they outlined the process of electron beam lithography for the production of semiconductor process masks. The system uses a focused electron beam similar to that used in an electron microscope to expose polymer resist masks. Because of the shorter wavelengths of the electron beam, very complex device patterns with nanometre size resolution can be created, superior to those possible with optical lithography. Unlike optical masks however which can be exposed with a single flash of light, the electron beam must trace out the pattern which slows down the process.


1967 Fairchild's Rob Walker introduced Micromosaic, the forerunner of the Application-Specific Integrated Circuit (ASIC). It was a logic chip with about two thousand transistors arranged into groups forming 150 freestanding AND, OR and NOT gates and transistors which were initially not connected to each other. The user specified the functions the circuit was to perform and a Computer-Aided Design (CAD) program, developed by Jim Koford and Ed Jones, would determine the necessary interconnections between the gates or transistors and generate the photo-masks required to produce the associated Aluminium overlay which completed the connections. Since the creation of the Aluminium interconnections is one of the last steps in the IC fabrication process, customizing the Micromosaic was not especially expensive. The initial layers of the chip were made in the normal way and only the final photomask for making the interconnections had to be custom-made. Nowadays custom chips may involve many more layers and processes.


Micromosaic was the first real application of Computer Aided Design.


See also Sutherland


1967 American inventor Douglas C. Engelbart from the Stanford Research Institute (SRI) applied for a patent for an X-Y position indicator for a display system. The development was supported by funding arranged by Bob Taylor and the device consisted of a wooden shell with two metal wheels and a button on top, which was nicknamed the computer mouse, a name which has stuck. It was first demonstrated at the 1968 Fall Joint Computer Conference in San Francisco together with an amazing array of new applications which used it, all developed by Engelbart's team of 17 researchers.

Visitors were stunned by what later became known as "The Mother of All Demos" which included a graphical user interface (GUI), hypertext, display editing with integrated text and graphics and two way video conferencing with shared workspaces allowing two people to work on the same document from different work stations. These were key enabling technologies which made computers accessible to non specialist users.

It was not until 1983 that the Apple Lisa implemented the GUI for the first time on personal computers and it was not until 1985 that Microsoft launched the Windows operating system.


See also Key Internet technologies


1968 After a ten year legal battle, in a ruling known as the "Carterfone" decision, the U.S. Federal Communications Commission (FCC) overturned the long standing prohibition on connecting equipment not supplied by the telephone company to the telephone network. It thus allowed any new third party equipment to be connected directly to the AT&T and other networks, provided that the supplier could demonstrate that it did not cause harm to the system. This ruling had major repercussions in the telecoms industry, opening the possibility of connecting other third party devices to the telephone system, so long as they obeyed the rules, effectively deregulating and liberalising the telecoms equipment market. Other networks eventually followed suit.


The "Carterfone" ruling came about after its designer, Thomas F. Carter a Texas rancher and entrepreneur, acting alone in a David and Goliath battle, took the mighty AT&T, General Telephone and others to federal court. The network operators had tried to ban his "Carterfone" which allowed users to extend the reach of their telephone lines, on a call by call basis, by passing calls to and from Carter's private mobile radio network using a simple acoustic coupler which merely picked up and passed on the acoustic sound signals passing between the phone line and the radio network and vice versa. It was equivalent to putting together the telephone handsets from each network with one handset inverted such that the earpiece of one faced the microphone of the other.

With lots of moral support but no financial support, Carter sold his house and ranch and liquidated all his assets to fund the contest. He argued that the telephone company actions represented a violation of U.S. Anti-Trust law and against the odds he won.


See also the acoustic coupler modem and another battle with AT&T.


The "Carterfone" ruling did not just apply to telephone instruments. It allowed other equipment such as modems and personal computers to be connected to the network, opening the door for many new and innovative applications and services to be carried on the network designed to carry only telephone traffic. Not only that - it allowed innovation to be drawn from a massive pool of new talent from hobbyists to academic institutions and industrial corporations, not just from the narrow confines of the network operators own labs. Without this ruling, the Internet and the World Wide Web would never have happened with the speed and scope as they did and we might still be waiting for email, video news, access to massive knowledge databases, music downloads and social networking.


Although the "Carterphone" ruling only applied in the USA, it marked the start of a trend of liberalisation in the world's telecoms markets. Outside of the USA at that time, apart from Bell Canada who owned Northern Electric, most of the telecoms operating companies did not have their own manufacturing subsidiaries. They were supplied by national champions such as Plessey and GEC in the UK, Alcatel in France, NTT in Japan or by giant multinationals such as Siemens, L M Ericsson and ITT. Development and deployment of telecoms infrastructure projects involved huge investments which were a barrier to new companies entering the market. Furthermore, all of these indigenous manufacturing companies tended to be vertically integrated, essentially locking out new entrants even as suppliers of components or of end users' customer premises equipment (CPE). National telecommunications operating companies controlled the technology, often receiving substantial government subsidies for research and development and engineering, the costs of which could easily be absorbed by their captive subscribers or by tax payers. The operating companies could dictate the specifications. They were supposed to be the innovators but they had a protected market and a guaranteed income and no effective competition. Why take risks?

Telecoms liberalisation came about when people realised the opportunities they were missing by protecting these monopolies. The consequences for some of them were catastrophic. Gone are Northern Telecom, ITT, Plessey and GEC. AT&T, once one of the richest and most powerful companies on the planet is now a mere shadow of its former self.

In return we have choice, provided by a host of new service providers and equipment suppliers, new telephone operating companies competing for our business, inexpensive telephone service, mobile phones and the Internet.

There were of course other factors influencing these developments, including bad management decisions, but liberalisation was the key game changer.


See also Key Internet technologies


1968 A group at RCA, headed by George Heilmeier, demonstrated the first operational Liquid Crystal Display (LCD) based on the dynamic scattering mode (DSM).

This was of similar construction to the more successful twisted nematic LCD design (See description) introduced the following year, using similar electrodes and light source, but it did not depend on light polarisation and so did not include the polarising filters. It used a different liquid crystal material in which the application of a voltage across the crystal increased the rate of collisions between its ions, causing them to scatter the light entering the crystal in all directions. This in turn causes less of the light to emerge from the crystal, so that its display appears dark or even black. By contrast, when there is no energising voltage across the crystal, the majority of the light passes through unimpeded and appears as white on the display.

A serious drawback with this design was that it needed a high voltage of up to 20 Volts in order to achieve a deep black display with a satisfactory contrast. Costs and power dissipation were consequently very high and the corresponding battery life was very low.


See also Reinitzer (1888), Dreyer (1950), Fergason (1969) and Gray (1970).


1968 American mathematician Donald Ervin Knuth published Volume 1: Fundamental Algorithms of his monumental work The Art of Computer Programming followed in 1969 by Volume 2: Seminumerical Algorithms and finally Volume 3: Sorting and Searching in 1973. The books rapidly became the software engineers' bible, now known as "TAoCP". Two more volumes are in the pipeline.


1968 Kummer and Weber working at the Ford Motor Company Scientific Laboratory in Michigan conceived and patented the Alkali Metal Thermal Electric Converter (AMTEC), an electrochemical direct energy conversion device for converting heat energy into electrical energy which they called the Sodium heat engine. Detailed working principles for practical devices were outlined by Weber in 1974.


1968 The launch of Intelsat III marked the introduction of the De-Spun Antenna technology which enabled high gain, directional antennas to be used with spin-stabilised satellites. The Intelsat III design was basically an upgrade of the Syncom 3 and Intelsat I (Early Bird), the GEO technology satellites pioneered by Hughes, with the addition of the de-spun antenna technology recently patented by Hughes engineer Harold Rosen. But the contract did not go to Hughes, on whose technology it was based. Power and politics played a role in the decision.


The controversial constitution of Comsat's original board had recently changed to bring in some members with relevant experience from three communications operating companies who were also major shareholders in Comsat. The board's fifteen members now included three representatives from AT&T, who had developed Telstar, the MEO satellite, and two representatives from ITT, who had partnered with TRW on a Comsat MEO study contract as well as in their unsuccessful bid for the Early Bird project. The board also included a representative of the "Hawaiian Telephone" operating company. It did not include anybody from Hughes. This new board generated its own controversy by specifying that Intelsat III should be capable of operating in both MEO and GEO orbits. The procurement decision was further complicated by political influence exerted by the U.S. government, Comsat's regulator, in response to lobbying by Intelsat's national government shareholders for a share of the work.

The result was that Hughes, the company with the leading GEO technology, declined to bid. Comsat eventually dropped the requirement for MEO operation and placed a contract for eight Intelsat III, GEO satellites, with TRW whose previous experience was with MEO satellites and whose original offer was for a design capable of operating at MEO and GEO altitudes. The Intelsat III design team was led by Morris Feigen.

Three of the 8 satellites in the series (F1, F5, F8) were unusable due to launch vehicle failures, and most of the remainder did not achieve their desired lifetimes.


See more about Satellite Technologies.


1969 On the 20th of July 1969, man first set foot on the Moon when American astronaut Neil Armstrong climbed out of the Apollo 11 Lunar Module, followed 17 minutes later by Edwin "Buzz" Aldrin, while Michael Collins remained in lunar orbit in charge of the Apollo Command Service Module (CSM), the Apollo mother ship. They remained there for two and a half hours gathering 22 kg (48 lbs) of samples of lunar soil and rocks and conducting experiments before returning to Earth. It was the culmination of the greatest, most complex and audacious engineering project ever undertaken.


NASA's Apollo Space Program was established in response to President John F. Kennedy's challenge on May 25th 1961, "that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth".

It came just six weeks after Russian cosmonaut Yuri Gagarin had made a single orbit of the Earth and landed safely, at a time when the US had still not achieved a manned Earth orbital flight and had a total of only 15 minutes manned space flight experience, gained just 20 days beforehand during a sub orbital flight by Alan Shepard in a Mercury capsule launched by a Redstone rocket, a direct descendant of the German V2 rocket.


To achieve the target, the Apollo project involved pushing the limits of just about every known engineering technology: mechanical engineering, hydraulics, electrical power systems, electronics, robotics, chemical and combustion engineering, rocket propulsion systems, explosives, thermodynamics, mechanical structures, dynamics, control systems, computers, software, communications, radar, optics, semiconductors, reliability engineering, telemetry, guidance, navigation, materials science, fabrication processes, electrochemistry, life support systems, human physiology, bio-engineering, aeronautics, ballistics, astrophysics, lunar geology, geography and environment. The work was carried out by 400,000 people and 400 contractors working with 20,000 companies and universities. The mission to the Moon took place in a mostly unknown, hostile environment using around 7,000,000 engineered components, all of which had to perform flawlessly since weight restrictions dictated that very few back-up systems were possible.


The Available Tools

The tools and components the design engineers had available to them in 1961 were primitive by today's standards. Engineers still used slide rules for their calculations, the pocket calculator was not invented until 1972. Engineering drawings were still made by draughtsmen on drawing boards since powerful CAD and graphics software did not become available until later in the decade. The only computing power was by large mainframe computers with shared access and programs and data being input by means of paper tape at a maximum read rate of 1000 characters per second, or even slower with IBM 80 character punched cards. The first patent for an integrated circuit had just been awarded to Robert Noyce in 1961, but the few ICs that were available then contained only one or two transistors and a few passive components and Intel's first microprocessor, the 4 bit, 4004 didn't make its debut until 1971.

Communications, so vital to the project, were also relatively primitive in 1961. The first transmission of television pictures across the Atlantic did not take place until the experimental Telstar 1 satellite was launched in 1962 and the first commercial communications satellite, Intelsat 1, the Early Bird, was not launched until 1965. Surprisingly, the very first transatlantic telephone cable, TAT 1, had come into service just 5 years earlier in 1956, and the Internet and mobile phones were unheard of.


The Challenges

  • Technical Tasks
  • Many of the technologies needed to complete the task were way beyond current capability and experience. These included:

    • Rocket Power - Sufficient to lift a huge payload and to escape from the Earth's gravitational pull. Three stage rockets had been proposed in the past but never implemented.
    • Life Support Systems - Sufficient to sustain life in the vacuum of space and the ability to operate in a weightless environment during the journey, and at one sixth of the Earth's gravity while on the Moon and at 6.5 G during re-entry into the Earth's atmosphere.
    • Navigation - Precise navigation to predetermined points in space up to 250,000 miles from the Earth.
    • Communications - The ability to communicate with mission control throughout the journey, even during the periods when the Earth's rotation made direct radio communication between the spacecraft and mission control impossible.
    • Telemetry and Control - The ability to monitor and control the position of the spacecraft, the status of all its operating systems as well as the bio-systems of the astronauts at all times.
    • Operations - The complex space vehicle with its millions of components, the life support systems operating in the hostile environment of space and the vast array of US and international support services on the ground had to work together flawlessly on the day. There were no second chances.
  • Management Control
  • Project management had to control the efforts of thousands of individual companies and research organisations, delegating their tasks, agreeing specifications, keeping them on schedule and on budget and finally integrating their outputs into a coordinated system.

    This was not always helped by the independence of, and rivalry between, NASA's research centres.

  • Political Considerations
  • America's prestige and security were at stake. The success of the Sputnik satellite and Gagarin's orbit of the Earth were an embarrassment to the US.

    The preliminary cost estimate of the Apollo project was $7 billion in 1961. The final cost was reported to Congress in 1973 as $25.4 billion.

    NASA's quest for the Moon spanned the administrations of four presidents - Eisenhower, Kennedy, Johnson and Nixon.

    It was initiated during the Cold War and its scope mirrored America's involvement in the 1959-1973 Vietnam War, which peaked in 1968. Maintaining government support for this costly civilian programme through different administrations in times of war was a major challenge.


See The Apollo Moon Shot - 38 Steps to the Moon and Back for details of the mission sequence.


1969 The Apollo 11 spacecraft which took the US astronauts Neil Armstrong, Michael Collins and Edwin E. "Buzz" Aldrin Jr. on their historic moon landing mission used two identical flight control computers, one in the command module and one in the lunar module, to collect and provide flight information and to control their navigational functions. Called the Apollo Guidance Computers (AGC), they were developed for the Apollo program by the MIT Instrumentation Laboratory under Charles Stark Draper, pioneering inventor of inertial navigation systems, with hardware design led by Eldon C. Hall and software design by Hal Laning.


When the Apollo program was announced, computers were large machines working in air conditioned rooms. Mini-computers were in their infancy and personal computers, lap-tops and tablets, like the Internet and mobile phones mentioned above, were also unheard of. The challenge was taken up by Charles Stark Draper and his team at MIT Instrumentation Lab who managed to cram an enormous amount of processing capability into a tiny, robust and reliable package of just one cubic foot in size.


The Apollo AGC was the first flight control computer to use integrated circuits (ICs). For reliability reasons the computer used only one type of IC containing a single 3-input NOR logic gate implemented with tried and tested resistor-transistor logic (RTL) technology and 4100 of these were used. (See diagram and truth table of a 3 Input NOR Gate).
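
A Python sketch of the principle (illustrative, not the AGC's actual logic equations): every function the computer needed could be composed from that one NOR building block.

    def nor3(a, b, c):
        """The AGC's sole building block: a 3-input NOR gate."""
        return int(not (a or b or c))

    # Composing the other basic logic functions from NOR alone:
    def not_(a):    return nor3(a, a, a)
    def or_(a, b):  return not_(nor3(a, b, 0))
    def and_(a, b): return nor3(not_(a), not_(b), 0)

    assert not_(0) == 1 and or_(1, 0) == 1 and and_(1, 1) == 1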

For memory the computer had 4K x 16 bit words of magnetic core RAM and 32K words of core rope memory for ROM. The clock speed was 2.048 MHz. The user interface consisted of an array of seven segment electroluminescent numeric displays and a numeric keypad. Instructions and data were entered manually using two-digit numeric codes.


Skillful, efficient software designed by Hal Laning enabled the spacecraft to be piloted to the moon and back using the computing power of a modern day domestic washing machine. Today's engineers would do well to remember this when they demand powerful microprocessors with megabytes of memory to accomplish the simplest of tasks.


Prior to the Apollo programme, the Mercury spacecraft flew in 1961 without an on-board computer. The Gemini spacecraft (1965-66) were the first to use on board computer guidance, for which they employed a computer with only 4K words of memory.


1969 James Fergason at Kent State University in Ohio discovered the twisted nematic field effect in liquid crystals, and in 1971 he produced the first commercial LCDs based on this effect pioneering their use in digital watches. These displays superseded the poor-quality DSM types. Fergason held over 150 U.S. patents.


See also Gray (1970).


The operating principle of the LCD depends on the polarising properties of the molecules making up the liquid crystal which change the angle of polarisation of the light passing through it when a voltage is applied across the crystal. The light transmission is thus electronically modulated as it passes through the liquid crystal.

LCDs do not emit light directly, instead they work on the principle of blocking light. This means that they must use a separate backlight or a reflector redirecting ambient light through the crystal. (One problem with the reflected light system is that it gives a very dim image in low ambient light conditions.)


The device is made up from many layers sandwiched together. The "meat" in the middle of this multi-layer sandwich is the liquid crystal which is held between two transparent electrodes, a source electrode and a display electrode, which can apply a voltage across the crystal. On either side of this sandwich are polarising filter layers, a source filter and a display filter, arranged parallel to each other but with their planes of polarisation 90 degrees apart. The outer layers are made of glass. The bottom layer carries the light source or mirror and the top layer carries the display.


In operation, the incident light is polarised by the source filter in line with its plane of polarisation. When there is no voltage across the electrodes (the "off" state) the polarised light passes through the crystal unchanged but cannot pass through the display filter because its direction of polarisation is perpendicular to the filter's plane of polarisation. No light can get through and the display appears black.

When a voltage is applied across the crystal (the "on" state), the crystal's molecules tend to twist, changing the polarisation of the light as it passes through the crystal. This rotation of the polarisation of the light exiting the crystal in turn allows some light to reach the display. Increasing the voltage increases the rotation of the light's plane of polarisation, letting more light through to the display. As the voltage increases, the intensity of the light reaching the display progresses from black through ever lighter shades of grey until the polarisation plane of the light matches the plane of polarisation of the display filter, when all the incident light gets through, appearing as white on the display.

If the LCD is constructed with the polarisation planes of both filters oriented in the same direction, then the 90 degree change in polarisation due to transmission through the crystal will result in a light display which will be white in the "off" state and black in the "on" state.
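
The shades-of-grey behaviour follows Malus' law: the intensity passing the display filter varies as the square of the cosine of the angle between the light's plane of polarisation and the filter's. A minimal Python sketch for the crossed-filter arrangement described above (the rotation values are illustrative):

    import math

    def transmission(rotation_deg):
        """Fraction of polarised light passing a display filter set at 90
        degrees to the source filter, after the crystal has rotated the
        light's plane of polarisation by rotation_deg (Malus' law)."""
        return math.cos(math.radians(90 - rotation_deg)) ** 2

    for rotation in (0, 30, 60, 90):
        print(rotation, round(transmission(rotation), 2))
    # 0 deg: 0.0 (black), 30: 0.25, 60: 0.75, 90 deg: 1.0 (white)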


The display electrode may be divided into large segments, suitable for seven segment alphanumeric displays, which can be addressed and energised individually to illuminate the corresponding region of the display leaving the rest of the screen dark. Alternatively the display electrode can be divided into tiny pixels enabling the display of images. The system can also be adapted to display coloured images by means of an extra display layer incorporating colour filters.


1969 American George E Smith and Canadian Willard S Boyle, while working at Bell Labs on semiconductor memory devices, discovered that the semiconductors could be made photosensitive and invented the Charge-Coupled Device (CCD), the first semiconductor image sensor. This device made low cost digital cameras and camcorders possible. The first commercially available CCD image sensor was produced in 1973. The image sensor performs five key tasks: it absorbs photons, it generates a charge (electrons) from the photons, it collects the charge, it transfers the charge across the chip, and it converts the charge to a voltage.
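
The charge transfer step is often likened to a bucket brigade: each clock pulse shifts every pixel's charge packet one cell along until it reaches the output amplifier. A toy Python sketch with invented charge values:

    # Toy model of CCD readout: each clock cycle moves every charge packet
    # one cell toward the output, where it is converted to a voltage.
    row = [12, 0, 7, 3]   # photo-generated charge in one row of pixels

    readout = []
    while row:
        readout.append(row.pop())   # the packet nearest the output is read...
        # ...and the pop shifts every remaining packet one cell along
    print(readout)                  # [3, 7, 0, 12]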

However the high current demands of the CCD image sensor coupled with the LCD viewfinder, (and motors in the case of camcorders), ate up batteries at alarming rates. Later developments of CMOS image sensors cut the sensor power consumption as well as the device costs, with only a slight penalty in image quality, and boosted demand for digital photography. Thus the battery consumption per camera was reduced but the overall demand was increased.


1969 The Seiko Astron, the world's first quartz wristwatch to be sold to the public was introduced by Seiko of Japan. Its claimed accuracy of ± 5 seconds per month or ± 0.2 seconds per day was extraordinary compared to the ± 10 seconds per day accuracy of a typical mechanical watch of the day. Surprisingly, it had far fewer components, the main ones being a quartz crystal, a hybrid, ceramic circuit board containing 72 transistors and 29 capacitors, a tiny stepping motor and a miniature Silver oxide button cell to provide the power. The hybrid circuit board was soon replaced by an integrated circuit and the oscillator frequency was increased by a factor of 4 from 8,192 Hertz to 32,768 Hertz. Apart from the display gear train, there were no mechanical parts. Because of its low component count, it was easier to produce and the manufacturing cost was also very low.
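
The choice of 32,768 Hertz was no accident: it is 2 to the power 15, so a chain of fifteen simple divide-by-two flip-flop stages reduces the crystal frequency to exactly one pulse per second to step the motor. In Python:

    freq = 32_768            # crystal frequency in Hz (2 to the power 15)
    for stage in range(15):  # fifteen divide-by-two flip-flop stages
        freq //= 2
    print(freq)              # 1 Hz - one pulse per second for the stepping motor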

The simplicity and superior timekeeping of the quartz watch soon made mechanical watches obsolete.


See more about The Quartz Watch and How it Works.


The project arose from a challenge in 1959 to produce a quartz timer for the upcoming 1964 Olympic Games in Japan. At that time the quartz clock was typically a laboratory instrument based on technology pioneered by Nicholson, Cady and Marrison and Horton in the 1920s and built into two metre equipment racks. A team headed by Tsuneya Nakamura was assigned to the task and they duly met their target but better still, after ten years of research and development they produced the Astron wristwatch.


1969 The world's first computer network went live when permanent links were established between computers at four university campuses, UCLA, Stanford Research Institute (SRI), UC Santa Barbara and the University of Utah, in a network called the ARPANet. The project was conceived by Robert Taylor, who in 1966 was appointed head of the IPTO, J.C.R. Licklider's old post. Taylor was influenced by Licklider's vision of a global computer network but needed no convincing since he was already frustrated that he needed to sit at different terminals to access each of ARPA's remote computers. He was a man who made things happen and he quickly secured the necessary funding and staff to construct a network connecting ARPA's computers together in the ARPAnet, the precursor of the Internet.


Taylor appointed Lawrence (Larry) G. Roberts from the Lincoln Laboratory as chief scientist and systems architect to manage the ARPANet project which was based on the recently developed packet switching technology. The system used an Interface Message Processor (IMP), what we would now call a router, to provide the packet switching gateway between each computer and the network. The first messages were sent between Leonard Kleinrock's lab at UCLA and Douglas Engelbart's lab at SRI.

Once the network was up and running, more and more institutions were connected to it until it became unsafe for military use. In 1983, the military and civilian parts of the ARPANET were separated into a restricted MILNET and the civilian ARPA Internet run by the U.S. National Science Foundation (NSF). At the same time, TCP/IP was adopted as the standard communications protocol on the civilian network enabling rapid growth as well as attracting international participation. That growth dwarfed the original ARPANet which ceased to exist as a discrete entity as the network became the Internet.


Before coming to ARPA, as a research manager at NASA, Bob Taylor arranged the financing for Doug Engelbart's research projects which pioneered innovations in computer user interfaces at SRI. At ARPA he continued funding Engelbart's SRI research group, including Engelbart's legendary demonstration of advanced computer technologies at the famous 1968 Fall Joint Computer Conference in San Francisco.


See also Key Internet technologies


1970 George Gray and colleagues working at Hull University developed stable liquid crystal materials whose optical characteristics could be controlled by voltage rather than heat enabling the development of practical LCD displays.


1970 Engineers Robert D. Maurer, Donald B. Keck, Peter Schultz and Frank Zimar, working at the Corning Glass Works in the USA, succeeded in producing low loss (17 dB/km) optical fibres by doping the glass core with Titanium, making practical fibre optic communications possible. It was the purest glass ever made.

A target of less than 20 dB/km loss had been set in 1966 by Kao and Hockham to make fibre optic transmissions practical. At 17 dB/km, 3 dB better than the target, twice the minimum required light emerged from the fibre.


1970 Intel and Fairchild both introduced semiconductor memory ICs which soon replaced the slow and expensive ferrite core memory.

  • Intel launched the first 1 Kbit Dynamic RAM (DRAM) chip. It was based on Dennard's DRAM cell concept, however scaling up the single cell design into an integrated circuit involved a multi-skilled team. In this case the original concept for the 1K chip came from William Regitz, formerly of Honeywell, the cell design was improved at Intel by Ted Hoff and Ted Rowe, the chip design was carried out by Bob Abbott, the overall circuit design was developed by Leslie Vadasz and Joel Karp, and the product engineer, John Reed, had to make several revisions to the part before acceptable yields and performance were achieved.
  • Fairchild launched the 256 bit Static RAM (SRAM) chip designed by Hua-Thye Chua using Longo's SRAM cell concept. SRAM uses six transistors per cell and is thus more complex and expensive than DRAM, however the external circuitry is simpler because the cells do not require refreshing, retaining their data so long as the power is not turned off or until new data is written into the cell. Static RAM is also faster than dynamic RAM, despite its name.
  • In 1971 Intel launched the 2 Kbit Erasable Programmable Read-Only Memory EPROM. Designed by Dov Frohman, it was based on the floating gate technique proposed by Kahng and Sze at Bell Labs. The memory is programmed electrically. UV EPROMs incorporate a quartz window which allows the information to be erased by exposure to ultra violet light, while the One Time Programmable OTP versions do not have the erase facility. In the mid-1980s the newer bulk-erasable flash memory replaced the EPROM.

1970 Bell Labs engineer Amos Edward Joel developed the cellular phone call handoff system which facilitated the continuity of a mobile phone call as the user moves out of one cell into another by automatically allocating a free frequency channel in the destination cell so that the phone call is not dropped. See also Practical systems


1970 Norman Abramson, an American engineering professor from Stanford University and keen surfer, looking for a similar position which would allow him more opportunities to enjoy his hobby, enquired at the University of Hawaii whether they had a vacancy for a professor of engineering. He was hired in 1970, joining the staff as professor of both Electrical Engineering and Computer Science with responsibility for Aloha Systems, a project started two years earlier to enable the University's computers, located on campuses on the various Hawaiian islands, to be connected to each other and to the ARPANet by means of radio links using packet switching technology. Funded by ARPA's Larry Roberts, Abramson created ALOHAnet. First deployed in 1970, it was the first modern data network linking computers together in a Local Area Network (LAN). The radio links used 9600 baud radio modems operating on a channel carrier frequency of 407.350 MHz.


An important feature of ALOHAnet was that any number of computers could be connected to the network, but there was no priority system for controlling which one had access: all of the computers could transmit data whenever they had data to send, without operator intervention. To avoid interference between the transmissions, a method of allowing random access was implemented. After a computer launched its data packets onto the network, it waited to receive confirmation from the destination computer that the packets had arrived, or for notification that other data packets were occupying the link. If the line was already occupied by another packet, a situation known as a collision, then no confirmation of reception of the transmitted packet would be received and the sending computer would wait for a random (but very short) period of time and then retransmit the data.
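
A minimal Python sketch of that retry discipline (the probability and timing values are invented for the example, not ALOHAnet's real parameters):

    import random

    def send_packet(collision_prob=0.3, max_backoff_s=0.05):
        """ALOHA-style random access: transmit, and if no acknowledgement
        arrives (a collision), wait a short random time and retransmit."""
        attempts = 0
        while True:
            attempts += 1
            if random.random() > collision_prob:   # acknowledgement received
                return attempts
            backoff = random.uniform(0, max_backoff_s)
            # a real station would now pause for backoff seconds, then retry

    print(send_packet())   # number of attempts this packet needed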


Many of the innovations pioneered on the ALOHAnet provided the inspiration for the development of the Ethernet three years later, particularly the system for managing access to the network.


See also Key Internet technologies


1971 Introduction of the world's first single chip microprocessor, the 4 bit, Intel 4004.

In 1969 Intel were asked by Japanese calculator company Busicom to produce a set of 12 custom chips to be used in desktop calculators. Marcian "Ted" Hoff at Intel decided that the calculator functionality could best be implemented with a programmable solution running on a general purpose processor. With Stan Mazor, Hoff designed the system architecture incorporating a simple 4 bit processor using relatively few transistors and a suitable instruction set to run on it. By changing the software the device could be used for other applications. In 1970 Federico Faggin who had recently joined Intel was assigned the task of turning the architecture into Silicon. The resulting product was a 3 chip set with a 2 kbit ROM chip, a 320 bit RAM chip and the 4 bit processor each housed in a 16 pin DIP package. Named the 4004, the processor contained 2,300 transistors and ran at a clock speed of 108 kHz.

Busicom were initially less than impressed with Hoff's departure from their original specification, wondrous though his new device was, and asked that Intel deliver the 12 chip set they had originally specified. However they ultimately recognised the power and versatility of his solution and agreed to use it. Production commenced in 1971.


Hoff's concept of programmable devices created a revolution in electronics and ever more powerful processors quickly followed as the potential of the microprocessor was realised.


The Intel 4004 was designed by three men in less than a year. Ten years later the Intel iAPX 432 one of the first 32-bit microprocessors, unveiled in 1981, took a hundred man-years and many millions of dollars to design.


1971 Alan Shugart working at IBM introduced the floppy disk, the first portable "memory disk" as it was called then. It was an 8 inch flexible plastic disk coated with magnetic iron oxide and had a capacity of 200K bytes. In 1976 Shugart, by then working in his own company, followed up with the 5 1/4 inch flexible disk drive and diskette for Wang Laboratories. In 1981, Sony introduced the first 3 1/2 inch floppy drives and diskettes. Now most of these mechanical devices have been replaced by semiconductor memory chips.


See also Disc Operating Systems (DOS)


1971 Bell Labs finally had a working cellular telephone system. Starting in 1962, Bell Labs engineers Joel S. Engel and Richard Frenkiel began work on developing a practical system from the basic concept proposed by Young and Ring in 1947. Developments included portable telephone handsets, base stations, databases, computer systems, user identification and tracking, call set up, billing systems, modulation and multiplexing schemes, signal compression, handoff and frequency re-use and electronics and antenna systems to implement these functions.


  • Call handling - For mobile communications to work, the system needs to know details about the subscriber and where the subscriber is at all times. This information is stored in two databases. A home location register (HLR), at the local telephone exchange of the subscriber's cellular carrier, stores details about the user, the services which have been contracted, billing information and the subscriber's current location, while a temporary visitor location register (VLR), at the nearest base station to the subscriber, stores the subscriber's identification code (ID) and current location.
  • Call Set-up

    • When the handset is switched on, it listens for a signal transmitted by the base station controller (BSC) in the nearest base station. This control channel is used for user authentication, call handling and assigning a vacant communications channel to the subscriber. If no signal is received the subscriber's handset displays an "out of range" or "no signal" message.
    • The user's handset includes an identification (ID) code which contains both the subscriber's details and the identification code of the carrier with whom the subscriber is registered. The handset periodically transmits the subscriber's ID via the control channel to the base station to announce its presence and after handshaking the base station stores the ID code temporarily in the VLR, so long as the handset is switched on.
    • The VLR re-transmits the subscriber's current location to the HLR which updates its files accordingly.
    • When a call is made to the cell phone subscriber, it goes first to the HLR which checks the status of the called subscriber. If the called subscriber is not registered (off line), or busy, or blocked (not subscribed to the required service or a delinquent account) the appropriate message is sent to the caller. If the called subscriber status is "OK" then the call can be routed to the VLR on which the subscriber is currently registered where the local BSC assigns a free communications channel to the called subscriber and the call is completed. At the same time the HLR also checks whether the subscriber is responsible for the call charges (overseas calls) and records the charges accordingly.
    • When the subscriber makes a call from a remote location, the VLR checks with the HLR that the subscriber's details are valid and that the status is "OK", then the call is routed back through the HLR which initiates call completion via the PSTN (public switched telephone network) or a cellular network and registers the appropriate call charge.
    • Overlaid on these transactions are various security measures.
  • Handoff is a method of allowing users to roam between cells without losing the call. It works automatically: the user's handset checks the strength of the signals from nearby base stations and negotiates with the base stations to lock on to the strongest signal. At the same time the handset is re-registered on the VLR associated with the new base station and assigned a new communications channel.
  • Frequency re-use enables the frequencies allocated to telephone transmission channels in one cell to be reallocated in neighbouring cells, with a gap of one cell between them, allowing many more users to have simultaneous access to the available frequency channels. By using smaller cells even more users could be accommodated. This requires the power of the base station transmitters to be limited so that neighbouring cells do not interfere with each other.

  • Note that in practice the cells are not regular hexagons with distinct boundaries. The shape depends on the terrain, whether there are any obstructions to the radio signals, the power output of the base station transmitters and the characteristics of the base station antennas. Furthermore the base station signals do not end at the cell "boundaries", they continue beyond the boundaries becoming progressively weaker. The received signal strength reduces according to the inverse square law the further the receiver is from the transmitter. As the receiver approaches an adjacent cell it comes within the range of low level signals from that cell's transmitter which become progressively stronger. The cell boundary is considered to be the point at which the strengths of the signals from the two adjacent cells are equal and this is the point at which handoff is initiated, as illustrated in the sketch after this list.

  • Multiplexing involves electronic switching and signal coding techniques which allow several subscribers to use the same communications channel simultaneously. A wide variety of multiplexing schemes and frequency channel assignments have been implemented by different national carriers and, since these are not necessarily compatible with each other, multi-band handsets may be required for international use.
  • Analogue and Digital Systems - The voice signals used in the first generation cellular systems were represented in analogue form by varying voltages as in the traditional fixed telephone line systems. In later generation systems however the voice signals are converted into digital form for transmission by the cellular network. Digital systems enable voice transmission with lower distortion and noise as well as signal compression and more complex multiplexing schemes both of which can be used to accommodate more channels in the available bandwidth.
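
A minimal Python sketch of the handoff decision described above (the powers and distances are invented for the example): the handset compares the inverse-square signal strengths of two base stations and switches at the crossover point.

    def rx_power(tx_power_w, distance_m):
        """Received signal strength falling off with the inverse square law."""
        return tx_power_w / distance_m ** 2

    # Handset moving along a line between two equal-power base stations
    # 2,000 m apart; the nominal cell boundary is the 1,000 m crossover:
    for x in (400, 800, 1200, 1600):
        a, b = rx_power(10, x), rx_power(10, 2000 - x)
        print(x, "stay on A" if a > b else "hand off to B")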

In 1979 a patent covering the practical implementation of cellular communications systems, including frequency reuse and handoff, which formed the basis of the first analogue cell phone network, was granted to engineers Charles A. Gladden and Martin H. Parelman, working at EG&G, a U.S. national defense contractor under contract to the DOE, and assigned by them to the United States Government.


Bell's first commercial cellular phone system was eventually launched by Illinois Bell in 1983.

In the meantime, in 1979, Nippon Telegraph and Telephone Corporation (NTT) launched the first fully automatic, commercial, citywide, cellular network in Japan, an 88 cell system, using Matsushita and NEC equipment and in 1981, the Nordic Mobile Telephone (NMT) system was launched in the Nordic countries with equipment from Ericsson and Nokia. See also Gross's 1950s Mobile Radio Phone System.


It is strange that there are dozens of pages on the internet about the American pioneers of cellular telephony, with some of these pages quoted dozens more times, but references to the contributions of the four companies who got there first are very difficult to find. Unfortunately the engineers who developed the Japanese and European systems tend to remain anonymous.


1971 Patent for the Gamma Electric Cell awarded to black American research engineer Henry Thomas Sampson working at the US Naval Weapons Center. This device is based on principles pioneered by Ohmart in 1951 and uses a source of nuclear radiation surrounded by dielectric materials arranged to capture the radiation thus producing a high voltage energy cell or nuclear battery.

See also the similar Betavoltaic cell


1971 Raymond Samuel Tomlinson, an American computer engineer working on local area networks for Bolt Beranek and Newman (BBN), implemented an email system on the ARPANet. Though it had been possible to send text messages on special military networks, at the time it was generally only possible to send email between users on the same computer. By using the @ sign to separate the login name from the host name, Tomlinson made it possible to associate the user with a particular computer and send email between networked computers. Since the ARPANet was the forerunner of the Internet, his system became the one used today on the Internet.


At the time Tomlinson did not consider the idea very important and, according to Forbes magazine, when he showed it to a colleague he said "Don't tell anyone! This isn't what we're supposed to be working on". The idea however was quickly picked up by Larry Roberts, the engineer who had developed the ARPANet, as the preferred way of communication via the network and it soon became the main source of traffic on ARPANet which until then had been a solution looking for a problem.


Tomlinson received no reward for his "killer application".


See also Key Internet technologies


1972 Experimental Sodium/Sulphur battery operating at 350 °C and delivering 50 kWh installed in a commercial electric vehicle.


1972 The C programming language was developed at Bell Laboratories by Dennis Ritchie. Many of its principles and ideas were taken from the earlier language B, developed by Ken Thompson, also at Bell Labs. B in turn had evolved from its earlier ancestor BCPL (Basic Combined Programming Language), developed in 1967 by Martin Richards at Cambridge University in the UK. Continuously improved over the intervening years, C is now the language of choice for many embedded software applications such as those used in battery management. Thompson and Ritchie had earlier developed the UNIX Operating System, which saw its first commercial implementation in 1971.


1972 The Pocket Calculator was launched by Clive Sinclair quickly usurping the slide rule which for generations had been the badge of an engineer. Though electronic calculators had been around for some time, the Sinclair "Executive" as it was known, was the first truly portable device, small enough to fit in a shirt pocket.

It used some interesting miniaturisation techniques. To enable the use of smaller batteries, he took advantage of an undocumented feature of the standard Texas Instruments GLS 1802 metal oxide semiconductor (MOS) calculator chip, which contained 7000 transistors, to reduce the power consumption by powering it with pulsed current rather than continuous current as recommended. The internal capacitance of the chip was sufficient to hold the voltage up between pulses provided that the voltage was refreshed often enough. With a pulse duration of 1.7 microseconds, a refresh rate of 200 kHz during calculations and 15 kHz in the quiescent state was sufficient to keep the calculator alive. This reduced the power consumption from about 350 milliwatts to 20 milliwatts so that the AA sized batteries normally used could be replaced by 3 small button cells.

With this calculator he also pioneered the use of a keyboard constructed from a single rubber mat with a protruding pip beneath each key which pressed directly on the Beryllium-Copper contacts closing the circuit when a key was pressed. This eliminated the need for springs in every key and simplified the design of the contacts reducing the thickness of the keyboard and dramatically reducing the component count.


1972 Launch of the digital multimeter (DMM) by Chauvin Arnoux.


1973 Martin Cooper of Motorola is considered the inventor of the first modern portable telephone handset. Cooper made the first call on a portable cell phone in April 1973 to his rival, Joel S. Engel, Bell Labs head of research. Later the same year Cooper set up a base station in New York though it took until 1977 before a network was available to use the phones.

The phone was eventually launched as the Motorola DynaTAC, with a price tag of $3995, in 1983 when the first Bell cellular network went live. Its case dimensions were 9 X 5 X 1.75 inches (228 X 127 X 45 mm) with a 4 inch (102 mm) antenna protruding from the top and it weighed in at 2.5 pounds (1.14 kg). (See Martin Cooper with his DynaTAC)


See also the Apple iPhone


The concept of the cellular mobile phone network was first outlined by Bell Labs in 1947.

The mobile phone alone has created a demand for over 500 million batteries per year. This in turn has spurred on the development of new battery technologies.


1973 A group headed by Vinton Gray 'Vint' Cerf from Stanford and Robert Elliot Kahn from the US Government Defense Advanced Research Projects Agency (DARPA, previously ARPA) began work on addressing the problems of communications between the many independent or proprietary computer networks wishing to communicate with each other. Although individual computer networks may have used packet switching for internal communications, up to that time there were no standards for packet, data and address lengths or signalling systems, so interconnection between networks was not possible. Cerf and Kahn developed the protocol later to be called TCP/IP to standardise the packet switching communications between computers and networks to facilitate universal interconnection.

IP - Internet Protocol specifies how data is cut up into packets and addressed to its destination.

TCP - Transmission Control Protocol ensures that data packets are reassembled in the order in which they were sent and that missing packets are re-sent.
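
That division of labour is still visible in any modern sockets API. A minimal sketch using Python's standard socket library (example.com is a placeholder host): the program hands a stream of bytes to TCP, which packetises, orders and retransmits them, while IP addresses and routes each packet unseen.

    # Minimal TCP/IP client: TCP splits the bytes into packets, IP addresses
    # and routes them, and lost packets are retransmitted transparently.
    import socket

    with socket.create_connection(("example.com", 80), timeout=10) as sock:
        sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := sock.recv(4096):   # read until the server closes the stream
            response += chunk
    print(response.split(b"\r\n", 1)[0].decode())   # the HTTP status line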


After ten years of development and negotiations with network users and providers, on 1 January 1983 the US Government's ARPAnet, and every network attached to the ARPAnet, officially adopted the TCP/IP networking protocol. From then on, all networks using TCP/IP have been collectively known as the Internet.

It was still some years more however before Europe formally adopted these standards. The International Standards Organisation (ISO) was trying to promote its Open Systems Interconnection (OSI) standards but the organisation was plagued by the need to satisfy the self interests of its member nations. As they each aspired to their perfect system, decisions took a long time to negotiate and were often political. Meanwhile they were overtaken by events as TCP/IP gained acceptance by default while they dithered.


The reason the Internet is dominated by the Americans is that when they found a solution that worked they standardised on it and moved on, whereas in Europe, national interests had to be satisfied in the search for perfection. Ben Segal who introduced the Internet to CERN commented - "The time constant of the ISO committees was longer than the time constant of the technology".


The standardisation of TCP/IP gave rise to the exponential growth of the Internet. The next major development was the World Wide Web.


See also Key Internet technologies


1973 American engineers Robert Melancton Metcalfe and David Reeves Boggs, working at Xerox PARC, invented the Ethernet, a networking technology used for interconnecting computers over short distances in Local Area Networks (LANs) and through routers to the Internet. It was the basis of distributed computer architecture which allowed communications between all the computers on the network, as well as peripherals such as printers and long term memory, via a single, serial communications bus.


Metcalfe was mainly responsible for the concepts while Boggs designed the system components to turn them into reality. Several of the principles involved had been tried out on the ALOHAnet in Hawaii created by Norman Abramson's team between 1968 and 1972.


The system depends on packet switching and a time based multiplexing system with random timing which allows multiple users to share the same communications channel. Any machine is authorised to initiate transmission over the channel at any time but only one transmitter is actually allowed access to complete the transmission at any one time and there is no concept of priority between machines. Transmission protocols ensure that data packets can only be sent when the channel is free in a system called Carrier-Sense Multiple Access with Collision Detection (CSMA/CD). It works as follows:

  • Carrier Sensing - Each device "listens" to verify that there is no communication on the line before transmitting.
  • Collision Detection - If two devices attempt to transmit simultaneously, then a collision occurs (i.e. two messages on the line at the same time).
  • If a collision is detected, a jamming signal is broadcast to notify all devices connected to the channel that a collision has occurred, forcing all devices on the network to reject their current packet; both transmitting devices then interrupt their transmissions and wait for a random time period before retransmitting. (If the collision rate is very high, the waiting period is increased, as sketched below.)

The network bandwidth requirement depends on the number of users, their individual bandwidth requirements and their traffic patterns. Limitations on the maximum packet size and the need for a waiting time between transmissions limit the system performance.
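
The random waiting rule in the last bullet above is usually implemented as truncated binary exponential backoff. A minimal sketch in Python; the 51.2 microsecond slot time is the figure from later 10 Mbit/s Ethernet, an assumption here since the original experimental system ran slower:

    import random

    SLOT_TIME = 51.2e-6   # seconds; classic 10 Mbit/s Ethernet slot time
    MAX_EXPONENT = 10     # the waiting window stops growing at this point

    def backoff_delay(collisions_so_far: int) -> float:
        """After the nth collision, wait a random number of slot times
        between 0 and 2^n - 1, so the waiting period grows as the
        collision rate rises, spreading retransmissions apart."""
        n = min(collisions_so_far, MAX_EXPONENT)
        return random.randint(0, 2 ** n - 1) * SLOT_TIME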

The original standard ran at 2.94 megabits per second (2.94 Mbit/s) over thick coaxial cable and could connect computers over a range of about one kilometre. The Ethernet is now a family of technologies with higher capacity versions carried over twisted pairs, thin coaxial cables or fibre optic links with speeds ranging up to 100 gigabits per second (100 Gbit/s).


Metcalfe left PARC in 1979 to found 3Com, a computer networking equipment manufacturer.


Later, Asynchronous Transfer Mode (ATM) made it practical to mix time-sensitive, high data rate traffic such as voice and video with ordinary data over the same networks.

It was not until 1997, 24 years later, that WiFi standards bringing Ethernet functionality to wireless LANs were published.


See also Key Internet technologies


1973 American engineer Mario W. Cardullo was awarded a U.S. patent for an RFID System incorporating an RFID tag with a passive transponder and rewritable memory in the system base transceiver. His initial application was for automated highway toll collection and used a small passive transponder placed in each vehicle and a base transceiver in each toll booth. The use of similar transponders was also proposed for identifying other remote objects.


Also in 1973, California entrepreneur Charles Walton received a patent for an RFID device used to control a door entry system which could unlock a door without a key. It used an entry card incorporating an embedded passive transponder which returned a signal to the reader of a transceiver near the door. When the reader detected a valid identity number stored within the RFID tag, the reader unlocked the door.


The RFID system was essentially a development of the military IFF (Identification - Friend or Foe) system, first introduced in 1935, which used a transponder to distinguish between friendly and hostile aircraft.

RFID works in a similar way. A signal broadcast from the RFID transceiver base is picked up by an antenna in a transponder in the RFID tag which wakes up and, in a passive system, reflects a signal from its antenna back to the base receiver, or in an active system, broadcasts a stronger signal from the antenna back to base. The transponder signal can be modulated to transmit information about the remote object which is stored in its RFID tag, back to the base transceiver which then stores the information about the remote object in its own memory. The transponder in a passive system is typically unpowered, deriving its transmitter power from its received signal and consequently has a range of only a few feet or meters. On the other hand the transponder in an active system has an independent power source and can have a much longer range depending on the power of its transmitter.

Using radio communications, the RFID system does not suffer the same limitations as similar barcode systems which depend on the proximity and precise placement needed by their optical readers. Furthermore, the RFID tag's transponder memory can hold much more data about the object than a simple paper barcode tag can without resorting to an external database. Transponder memory capacity is typically between 2 kilobytes (KB) and 8 KB and the content and data format can be easily customised, though recommended standards do exist.


1973 Engineers at Perkin Elmer (now SVG Lithography) introduce the projection printer for exposing semiconductor process masks, replacing the previous method of contact printing. The combination of projection printing, without mask-wafer contact, with positive photoresist revolutionized photolithography, dramatically reducing defect rates and improving yields.


1974 The ATS-6 (Applications Technology Satellite-6), the sixth in a series of experimental satellites commissioned by NASA, was launched to investigate the space environment and test the feasibility of new satellite technologies and applications. It carried 23 experiments including satellite vehicle systems, communications, meteorology and navigation.

ATS-6 was the first GEO satellite to use three axis stabilisation which enabled the use of large deployable antennas and solar panels. It was also the first to use attitude sensing by means of RF interferometry and star tracking which enabled precision pointing and slewing of the satellite. The combination of its high solar energy capture rate and its high gain antenna also enabled it to provide the first Direct TV Broadcasting by Satellite to simple home receivers.


The fundamental stabilisation and interferometry concepts on which the ATS-6 technology depended were developed by William C. Isley and Daniel L. Endres from NASA's Goddard Space Flight Center and the satellite was built for NASA by Fairchild Industries with William A. Johnston as its program manager.

The design and manufacturing of the Attitude Control System were subcontracted by Fairchild to Honeywell Aerospace under the leadership of program manager C. G. Senechal.

The design and development of the Telemetry and Command Subsystem and the Interferometer were subcontracted to IBM where the program manager was Q. G. Marble.


See more about ATS 6 and Satellite Technologies.


1974 Paul Werbos brought together research from several sources to develop a neural network model which is the basis of many of today's "self learning" applications.


1974 The semiconducting properties of organic materials discovered and their use as the basis for a bistable switch patented by John E. McGinness, Peter Corry and Peter H. Proctor working on melanin at the University of Texas. Their organic semiconductor switch is currently part of the semiconductor chips collection of the Smithsonian Institution.

Three years later, without citing the Texans' prior art, Heeger, MacDiarmid and Shirakawa published a similar paper on conducting polymers, for which they were subsequently awarded the Nobel Prize.


Mid 1970's Development of the sealed Lead acid (SLA) or valve regulated Lead acid (VRLA) batteries invented by Jache in 1957.


1975 The Nickel Hydrogen Battery (NiH2) patented by J.D. Dunlop, J. Giner, G. van Ommering and J.F. Stockel working at COMSAT in the USA. The U.S. Navy's Navigation Technology Satellite (NTS-2), the first satellite deployed in the Joint Services NAVSTAR Global Positioning System (GPS) launched in June 1977 was the first to use Nickel Hydrogen batteries which were rapidly adopted for powering other Low Earth Orbit (LEO) Satellites and later used in the Hubble Space Telescope.


1975 H. Edward (Ed) Roberts, an electronics engineer who owned MITS, a small struggling electronics store, made history when he developed and sold the Altair 8800 microcomputer, the first successful personal computer. It featured on the cover of the January 1975 issue of Popular Electronics magazine and was an instant hit. The Altair sold in kit form for $397 and shipped with a CPU card containing an Intel 8080 eight bit microprocessor and a one kilobyte memory card which came with only 256 bytes of memory chips and no software. Buyers were able to write short binary machine language programs that could be toggled in through switches on the front panel and the output was displayed as binary data on a row of LEDs. It was left to Bill Gates to supply more practical software for the Altair machine. (See next)


Having started a revolution, Roberts sold his business two years later and returned to his native Georgia to pursue his first interest, medicine. He completed medical school and set up practice as a small town doctor.


1975 19-year-old Harvard student William Henry (Bill) Gates III and his friend, 21-year-old Paul Gardner Allen, on hearing about the launch of the Altair microcomputer, called MITS and offered them a BASIC interpreter for their machine even though they didn't have one themselves. They had neither an Altair 8800 nor the Intel 8080 microprocessor chip that ran the computer but they immediately set to work using the school's PDP-10 minicomputer to simulate the Altair and eight weeks later produced a BASIC interpreter which ran in only 4096 bytes of memory. Allen took the software on paper tape to the MITS office and loaded it onto an Altair machine which he had never seen before and it worked first time. Altair produced a 4K memory board to run the software and Microsoft was born.


In similar vein, in 1980 when IBM was looking for an operating system for the new Intel 16 bit 8086 CPU used on their first PC, Gates didn't have a DOS (Disk Operating System), but he convinced IBM that he had one in the pipeline which was almost finished. Microsoft then purchased the rights to QDOS (Quick and Dirty Operating System) written by Tim Paterson of Seattle Computers for $50,000 and repackaged it as MS-DOS. It was written in some 4,000 lines of assembly language code and required only 12K bytes of memory. IBM found over 300 bugs in the first version submitted for testing.

Gates also talked IBM into letting Microsoft retain the rights to sell MS-DOS separately from the IBM PC project, a disastrous decision for IBM who at the time were considered invincible. It led to the establishment of a computer standard that IBM was unable to control which in turn enabled the creation of a market for PC clones and the spectacular rise of the "upstart" Microsoft, all at the expense of IBM.


In 1985 Microsoft's position was further consolidated when it introduced the first version of the Windows Operating System, which added to MS-DOS the Graphical User Interface (GUI) and much of the new functionality pioneered at Xerox PARC and developed further by Steve Jobs.


See also the sad tale of Gary Kildall's CP/M (next)


1976 American microcomputer entrepreneur Gary Arlen Kildall announced CP/M (Control Program for Microprocessors) a Disk Operating System (DOS) which transformed early personal computers from hobbyists' playthings into serious scientific tools or business machines.

Early hobby machines and PCs were hampered by the lack of bulk memory. Semiconductor memory was expensive so that programs and data were stored on punched paper tape or magnetic tape, often from audio cassettes or other audio devices. It was slow and access was difficult. The user had to spool through the entire tape to get the required data and the data was "read only" and could not be modified. The advent of the floppy disk solved these hardware problems but required some clever software to make it work.

Floppy disk storage capacity and access speeds were much higher but their main advantage was that the disk provided the facility for random access of the data and the data could also be over-written. The main task of the DOS software was to break the data into a set of fragments, which could be precisely stored in whatever free spaces were available on the disk, and to locate, retrieve and reassemble these fragments to recover the data whenever it was required.
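
A toy model of that fragment bookkeeping, sketched in Python under obvious simplifications (fixed-size blocks, and an in-memory table standing in for the on-disk directory):

    BLOCK_SIZE = 128   # bytes per block; real floppies used sector sizes like this

    class ToyDisk:
        def __init__(self, n_blocks):
            self.blocks = [None] * n_blocks   # the simulated disk surface
            self.table = {}                   # file name -> list of block numbers

        def write(self, name, data):
            # Break the data into fragments and drop each into any free block.
            fragments = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
            free = [i for i, b in enumerate(self.blocks) if b is None]
            if len(free) < len(fragments):
                raise OSError("disk full")
            self.table[name] = free[:len(fragments)]
            for block_no, fragment in zip(self.table[name], fragments):
                self.blocks[block_no] = fragment

        def read(self, name):
            # Locate, retrieve and reassemble the fragments in order.
            return b"".join(self.blocks[i] for i in self.table[name])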

Because personal computers were single user devices there was no need to provide time sharing for multiple users as in mainframe computers. Similarly, since the computer did not have to service the needs of a complete department or business with multiple users, the number of input/output ports could be limited to a few individual devices such as a keyboard, a mouse, a graphical display, a printer and one or more data input reading devices including the disk itself.

To enable the system to run on a wide range of manufacturers' hardware, Kildall also introduced the concept of a Basic Input/Output System (BIOS) which was a small block of code which could be modified to customise the DOS for different computer systems without having to rewrite the entire code. The initial CP/M code was however built around the Intel 8080 eight bit processor which was in common use at the time.

See also Operating Systems.


Kildall's CP/M operating system eliminated many of the limitations of the early PCs permitting much more complex and useful technical and business applications to be run on the machines.


CP/M was quickly adopted as standard by many PC manufacturers to enhance their product lines and generated substantial income for Kildall's company Digital Research. However, there was a sad twist to the tale which has become a folk story of the early days of the PC.

In 1980 IBM were looking for a disk operating system for their first personal computer due to be launched the following year. As the world's biggest and most successful mainframe computer systems manufacturer, their traditional business was supplying complex computer systems to large corporate customers. They had an impeccable reputation and were famous for their excellent customer support even though their typical customers' facilities were usually managed by staff with high technical competence. IBM's operations were traditionally strongly integrated. They made all their own components, from plastic parts to semiconductors, as well as developing all their own software in house.

But the PC was very different. IBM had never made such small, high volume machines before. They were unfamiliar with the market characteristics and requirements of individual PC users, their priorities, their technical knowledge, their operating methods, their preferred specifications and their distribution and support requirements. Customer support to the many inexperienced users working on their own could be a nightmare and a potential threat to IBM's reputation. Consequently they planned, for the first time, to out-source the software development.

Digital Research was already working on a new version of their operating system, CP/M-86, which was designed around Intel's 8086 16 bit processor which IBM also intended to use in their new PC. Running Kildall's CP/M-86 software on the IBM PC seemed a perfect match.


On the recommendation of Bill Gates, IBM approached Digital Research and were received by Kildall's wife Dorothy who managed the company's commercial affairs. Meanwhile Gary was out somewhere flying his private plane. Unfortunately they blew it. Details are sketchy, but it appears that Dorothy was presented with a very onerous contract by the imperious IBM representatives which, it has been said, was typical of their supplier relationships and she refused to sign up.

IBM subsequently made a deal, for the supply of the PC operating system software, with Microsoft who didn't appear to have a problem with the contract. (See above) The IBM PC running MS-DOS became the industry standard and, five years later, Bill Gates became a billionaire at the age of 31.


1976 Stephen Wozniak, a drop out from the University of California, Berkeley, had previously offered the design for a personal computer to his employer, Hewlett Packard, but this was rejected. He subsequently teamed up to pursue his dream with another college drop out, Steven Jobs, who had been adopted as a baby (born Steven Paul) by Paul and Clara Jobs. In 1976 they produced the Apple I, a single board computer kit which was launched at Wozniak's "Homebrew Computer Club" and it immediately took off.


With "Woz" as the technical wizard and Jobs as the marketing mastermind, they followed up in 1977 with the Apple II which for the first time brought computing to the desktop, liberating it from the mainframe and the high priests of the IT department, creating an enormous world wide interest and demand for personal computing.


The original Apple II specification included a MOS Technology 6502 microprocessor running at 1 MHz and an 8 bit data bus with 8 KB of ROM storing the Integer BASIC programming language, 4 KB of RAM and an audio cassette interface for loading programs and storing data. The video controller provided either 40 columns by 24 lines of monochrome, upper-case-only text or low resolution (40 X 48) graphics with 15 colours or high resolution (280 X 192) graphics with 6 colours and an NTSC composite video output suitable for display on a TV monitor, or on a regular TV set by means of a separate RF modulator. It was not sold in kit form but in a smart case which included a keyboard.

Building on the Apple I design, it also had eight expansion slots, one of which was reserved for RAM and ROM updates and the other seven designed to accommodate future applications.


At the time not many people knew what to do with a personal computer but the Apple II quickly caught their imagination. Apple followed up by introducing a series of performance enhancements, accessories and software upgrades ranging from games to business applications including VisiCalc, the spreadsheet which established the personal computer as a serious business tool.

Access to Apple's expansion slots was an important factor in its success as it enabled third party hardware manufacturers to design new applications to run on the machine enhancing its capability even further.


A great leap forward for the company came after Jobs' 1979 visit to the Xerox PARC Computer Science Labs where he was amazed at the advanced state of the lab's user interface and connectivity technologies, which he soon set about incorporating into Apple's product line in the Lisa and Macintosh computers.


Steve Wozniak was the technical wizard who built the computers but Steve Jobs was the supreme marketing man who built the company.


Two quotes from Steve Jobs when asked what market research he had done.

  • "Did Alexander Graham Bell do any market research before he invented the phone?"
  • "Some people say, 'Give customers what they want.' But that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, 'If I'd asked customers what they wanted, they would have told me, "A faster horse!"' People don't know what they want until you show it to them. That's why I never rely on market research."

By July 2011 Apple had built up cash reserves of $76.4 billion (£47.0 billion) which exceeded the US Treasury Department's operating cash balance of $73.7 billion (£45.3 billion).

By July 2018 Apple had become the most valuable company in the world with a stock market valuation of over $1 trillion.


Apple's early success gave rise to a host of imitators but only Apple has stood the test of time. Some of the personal computer brands which briefly flowered then fell by the wayside, many of which lost huge sums of money on the way, include the following:

  • Acorn, ACT, Altos, Amstrad, Atari, AT&T, BBC, Cambridge Research, Camputers, Coleco, Commodore, Compucolor, Cromemco, Data General, DEC, Dragon, Exidy, Franklin, Grid (Grundy), ICL, IMSAI, Intertec, ITT, Kaypro, Kim 1, Mark 8, Matra, Mattel, Micral, MITS, Nascom, North Star, Ohio Scientific, Olivetti, Oric, Osborne, Philips, Research Machines, Scelbi, Schneider, Sinclair, Sirius, Spectravideo, Sphere 1, SWTP, Tandy, Tangerine, Texas Instruments, Thomson, Timex, Victor, Vienna (Nortel), Zenith and in 2005 the once invincible IBM, creator of the PC standard.
  • There were also many other less famous names which hardly saw the light of day before they expired.

You can't say that the electronics business is not competitive.


See also the Apple iPhone


1976 Austrian engineer Gottfried Ungerboeck, working at IBM Research Labs in Zurich, published a paper outlining a method of forward error correction of digital signals using convolution coding. The extra bits inserted by the convolution process are used by the receiver to detect and correct errors introduced during transmission. The modulation and demodulation scheme known as Trellis Coding takes its name from the state diagram of the receiver signals which resembles the wooden trellis used by gardeners to support their plants.

The technique received little notice at the time but when it was republished in 1982 it was recognised as a major opportunity for increasing the bit rate of modems. Standard telephone lines were band limited to around 3.6 kHz which meant that the theoretical transmission baud rate was similarly limited. A 3,500 baud signal modulated with 4 bits/symbol (QAM) provided a maximum bit rate of only 14 kbit/s, but in practical systems the best performance achieved was typically 9.6 kbit/s. It was not possible to achieve higher bit rates simply by increasing the bits per symbol because the demodulator could not reliably distinguish the smaller state changes from the noise. Even though the transmission of the extra error control bits used some of the available channel bandwidth, the superior noise immunity made possible by Trellis Coding allowed modulation with up to 10 bits per symbol, enabling effective modem bit rates to be doubled and greatly increasing their utility.
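
The arithmetic behind those figures is simply the symbol rate multiplied by the bits carried per symbol, as this fragment shows (the 3,500 baud line rate is the figure quoted above):

    # bit rate = symbol (baud) rate x bits carried per symbol
    BAUD_RATE = 3500   # symbols per second on a voice-band telephone line

    for bits_per_symbol in (4, 10):
        print(f"{bits_per_symbol} bits/symbol -> {BAUD_RATE * bits_per_symbol / 1000} kbit/s")
    # 4 bits/symbol gives the 14 kbit/s ceiling of plain QAM; at 10 bits/symbol
    # some bits are Trellis error-control overhead, so the user rate is lower.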


1976 The principle of shared key encryption which enables two parties, who have no prior knowledge of each other, to jointly establish a shared secret key over an insecure communications channel using asymmetrical key algorithms was published by Stanford researchers Whitfield Diffie and Martin Hellman.


In 1977 MIT researchers Ron Rivest, Adi Shamir and Leonard Adleman published their algorithm known as RSA encryption which enabled more general public key encryption.


Before the advent of public key encryption, communications were encrypted and decrypted using secret private keys known only to the sender and the recipient. Keeping the private keys secret has always been a problem since the key had to be sent between the communicating parties. The diplomatic service used to send secret keys via couriers with the keys in briefcases handcuffed to their arms.

Asymmetrical systems avoided this problem by using different keys for encrypting and decrypting the message, a public key and a private key. The public key is available to anyone to encrypt the message, but the message can only be decrypted by the holder of the private key. The system depends on a mathematical function which can create a public encryption key (a number) from two numbers which constitute the private key, known only to the key holder. It must however be very difficult to perform the function in reverse, that is, to derive the two numbers used to generate the public key from a knowledge of the key itself.


Such a system is possible by using very large prime numbers p and q both of which are a hundred or more digits long as the private key. The product of these two prime numbers will be an extremely large number N = pq. Factorising the number N to retrieve p and q will be so time consuming, even with the most powerful computer, that it will be impractical. The number N can be published as the public key which anyone can use to encrypt a message, but the message can only be decrypted from a knowledge of p and q, the private key. Practical systems may use more than one level of encryption to make messages even more secure.
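
The whole scheme can be seen in miniature with deliberately tiny primes. A toy sketch in Python (real keys use primes hundreds of digits long, which is precisely what makes the factorisation impractical):

    # Toy RSA: the asymmetry rests on multiplying primes being easy
    # while factorising their product is hard.
    p, q = 61, 53                  # the private knowledge
    n = p * q                      # 3233; published as part of the public key
    phi = (p - 1) * (q - 1)        # derivable only by someone who can factor n
    e = 17                         # public exponent, chosen coprime with phi
    d = pow(e, -1, phi)            # private exponent: e x d = 1 (mod phi)

    message = 65
    ciphertext = pow(message, e, n)    # anyone may encrypt with the public (e, n)
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
    assert recovered == message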


After the publication of the RSA encryption it emerged that such a system had already been developed in 1973 at the UK intelligence agency GCHQ. Cryptographer James H. Ellis had been working on the system since 1965, joined in 1973 by mathematician Clifford Cocks and later supported by cryptographer Malcolm J. Williamson but their work was classified as top secret and they were forbidden to publish or patent anything. While Diffie, Hellman, Rivest, Shamir and Adleman all achieved fame and fortune, Ellis, Cocks and Williamson remained in obscurity. GCHQ finally allowed their work to be made public in 1997, one month after Ellis had died.


See also PGP encryption - Pretty Good Privacy


1976 British born, American chemist, M. Stanley Whittingham investigating electrode structures at Stanford University discovered the intercalation mechanism (See diagram) for electrical energy storage. This involved forming the crystalline structure of the electrode materials into layers between which Lithium and other ions could be stored. This in turn enabled ions to be shuttled back and forth from one electrode to the other, creating a high power density, rechargeable battery without the need for the conventional slow, reversible electrochemical transformation. Subsequently, working at Exxon, he constructed batteries based on Titanium disulfide cathodes and Lithium-Aluminium anodes to demonstrate the process which was patented in 1977 and assigned to Exxon.

Whittingham's work formed the basis of the future development of the first commercial Lithium batteries.


In 1979 German born, American researcher John B. Goodenough working at Oxford University perfected Lithium-ion rechargeable battery technology. Using metal oxides, a combination of oxygen and a variety of metal elements, he was able to increase the charge and discharge voltages above those produced by Whittingham's cells thus increasing their energy density. The first design to tame Lithium, the lightest and most reactive of metals, in a stable battery, it used Lithium Cobalt Oxide (LiCoO2) and Lithium Manganese Oxide (LiMn2O4) based cathodes with a Lithium metal anode. The patents for the invention were however awarded to the UK Atomic Energy Commission (now AEA Technology) who funded the research.


The design was improved in 1985 by Japanese researcher Akira Yoshino. Working at the Kawasaki Laboratory, he replaced the Lithium metal anode material which suffered from dendrite formation, and the consequent possibility of short circuits, with more stable Carbon which had been shown to be capable of intercalating the Lithium ions. The result was a major improvement in safety coupled with a dramatic growth in demand. Sony of Japan were the first to commercialise the technology, manufacturing Lithium-ion cells on an industrial scale.


Subsequently working at the University of Texas in 1996, Goodenough patented the more stable Lithium Iron Phosphate (LiFePO4) cathode chemistry.


Goodenough did not benefit financially from the patents but in December 2000 he was awarded the Japan Prize (and $450,000) by The Science and Technology Foundation of Japan for his invention.


See more about Lithium Batteries


Whittingham, Goodenough and Yoshino were jointly awarded the Nobel Prize in Chemistry in 2019 for the development of Lithium-ion batteries. Goodenough was then aged 97, the oldest person ever to be so honoured.


1977 AT&T and Bell Labs constructed the first prototype cellular telephone system realising a concept which they had first proposed in 1947. Public trials did not follow until 1978.


1977 American Alan J. Heeger, New Zealander Alan G. MacDiarmid and Japanese Hideki Shirakawa, working at the University of Pennsylvania, published their discovery of electrically conducting polymers. It drastically changed the industry view on the potential of polymer materials, and sparked intensive new developments to exploit the organic electronics technology. First came solid state batteries and supercapacitors with plastic electrodes followed by many applications of thin films in active electronic devices such as organic transistors and LEDs.


The members of the group were awarded a Nobel Prize in 2000 for the "discovery and development of conductive polymers".

They have however been accused of Citation Amnesia AKA The Disregard Syndrome for failing to cite prior art by McGinness, Corry and Proctor.


1977 The first Magnetic Resonance Imaging (MRI) body scanner constructed in the USA by Dr Raymond Damadian. MRI scans differ from Computerized Axial Tomography (CAT) Scans which build up an image from x-rays in that there is no exposure to radiation. MRI images are also 20 to 30 times more detailed than CAT scans and can be displayed in colour.

Water constitutes two thirds of the body's weight and MRI depends on detecting differences in water content among the body's tissues and organs which are reflected in a Nuclear Magnetic Resonance (NMR) image. "Nuclear" was later dropped from the "MRI" name to avoid frightening the patients.

The nuclei of the hydrogen atoms in the water are able to act as microscopic compass needles. When the body is exposed to a strong magnetic field, the nuclei of the hydrogen atoms become aligned in a common direction. When submitted to pulses of radio waves, the energy content of the nuclei changes. After the pulse, a resonance wave is emitted when the hydrogen nuclei return to their previous state. The small differences in the oscillations of the nuclei are detected. (Animal magnetism?) Computer processing is used to build up a three-dimensional image that reflects the chemical structure of the tissue, including differences in the water content and in movements of the water molecules.


NMR is not just used to investigate biological samples; NMR techniques are used to map out the connectivity of the atoms as well as the 3-dimensional molecular structure and stereochemistry of the chemicals used in battery manufacture.


Damadian was the first to point out, in a landmark 1971 paper in Science (based on experiments involving lab rats), that MRI could be used to distinguish between healthy and cancerous tissue and in 1972 he filed the first patent for MRI scanning. Despite this, the Nobel Prize for Medicine inexplicably went to Dr Paul Lauterbur of Stony Brook University in New York and Peter Mansfield of the UK's Nottingham University for their contributions to the development of MRI scanning.


1977 Engineers John Birkner and Hua-Thye Chua at Monolithic Memories Inc. invented the programmable array logic (PAL) chip. Now more commonly called a PLD or programmable logic device, it is a logic IC that can be programmed by the user. MMI's chip contained 2,048 tiny fuses in the interconnecting lines between the gates which could be blown to create almost any configuration of up to two hundred and fifty AND, OR, and NOT gates. Blowing the fuses is a relatively simple procedure that disconnects some gates, while blowing the so called "anti-fuses" makes connections to others. Fuse-based PLDs of this kind are not re-programmable.
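
Functionally, each PAL output is an OR of programmed AND terms, a "sum of products". A sketch of that evaluation in Python, in which removing a pair from a term plays the role of blowing a fuse:

    def pal_output(inputs, product_terms):
        # OR together the AND terms: each term is a list of
        # (input_name, required_level) pairs left connected by the fuses.
        return any(all(inputs[name] == level for name, level in term)
                   for term in product_terms)

    # Exclusive-OR of a and b as two product terms: a.NOT(b) + NOT(a).b
    xor_terms = [[("a", True), ("b", False)], [("a", False), ("b", True)]]
    assert pal_output({"a": True, "b": False}, xor_terms)
    assert not pal_output({"a": True, "b": True}, xor_terms)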


1977 American engineers Dennis C. Hayes and Dale Heatherington, working on modems for electronic cash transfer and credit card applications at National Data Corp, branched out on their own and invented the PC modem designed to run on the recently introduced home computers, establishing the critical technology that brought the possibility of Internet connectivity to the masses. Their first products, launched in 1977, were 300 bits per second modem boards for the S-100 bus and then for the Apple II computers. Prior to that, modems had been used in the 1950s by the U.S. military, and the first commercial modem was a full-duplex 300 bits per second device launched by AT&T in 1962. These early products were designed for incorporation into specialist applications with proprietary interfaces. They were difficult to set up and were unsuitable for consumer applications. Acoustic couplers had been available since 1964 but these were also unreliable and difficult to set up. The Hayes PC modem allowed call setup and teardown and data flow under computer control, simplifying and speeding up connections to the PSTN and ultimately to the Internet, though before the Internet the modems were mainly used for connection to hobbyist bulletin boards.


Hayes' initial products were designed to plug directly into the personal computer's main data bus. Supplying the proliferation of new PC designs becoming available at the time would have required numerous product variants. To avoid this problem, in 1981 they introduced the SmartModem which connected instead to the computer's RS232 serial interface, allowing the modem to be used on any computer with a standard serial port; the Hayes instruction set for controlling modem functions with software became an industry standard.
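
The flavour of the Hayes command set survives in modems to this day. A hedged sketch of driving a Hayes-compatible modem from Python; the third-party pyserial package, the port name and the responses shown are illustrative assumptions:

    import serial   # third-party "pyserial" package, assumed installed

    with serial.Serial("/dev/ttyS0", baudrate=300, timeout=2) as port:
        port.write(b"ATZ\r")           # Hayes AT command: reset the modem
        print(port.readline())         # a Hayes modem answers b"OK"
        port.write(b"ATDT5551234\r")   # dial 555-1234 using tone dialling
        print(port.readline())         # e.g. b"CONNECT 300" on success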


See also Key Internet technologies


1978 Computer hobbyists from Chicago, Ward Christensen, an IBM mainframe programmer, and Randy Suess, created the first dial up Computerised Bulletin Board Systems (CBBS), the forerunners of modern Internet chat rooms, message boards, e-mail and Twitter, bringing computer network connectivity for the first time to the general public, or at least to the technically savvy public. At their heart was a file transfer protocol (later called XMODEM) for sending binary computer files through modem connections (See previous entry above). It was essentially a software terminal emulator application with similar functionality to the dumb terminals which provided access to the mainframe computers in use at the time. Their system however was designed to run on the recently launched personal computers, enabling users to connect to the bulletin board or to log on to remote computers via the telephone PSTN.

The original CBBS was the computer equivalent of the traditional cork-and-pin notice boards found in libraries, schools and supermarkets. Early systems were very slow and also small, allowing only one modem at a time to access the system, so users had to wait their turn. They were managed by volunteer Systems Operators (SysOps), usually from home, and the users, mostly hobbyists, could dial into each other's machines to leave messages for other users.
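
The XMODEM packets at the heart of the system were simple enough to sketch in a few lines. A minimal construction of the original checksum variant in Python, offered as an illustration rather than a complete implementation:

    SOH = 0x01   # start-of-header byte that opens every XMODEM packet

    def xmodem_packet(block_number, data):
        # One packet: SOH, the block number, its complement, 128 data
        # bytes (padded), then a simple additive checksum of the data.
        payload = data.ljust(128, b"\x1a")      # pad short blocks with SUB
        header = bytes([SOH, block_number % 256, 255 - block_number % 256])
        return header + payload + bytes([sum(payload) % 256])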


Initially the bulletin boards tended to be stand alone islands but in 1984 Tom Jennings from San Francisco set up FidoNet as a worldwide store-and-forward network using software he wrote himself to enable communication between bulletin board systems.

Over the years CBBS functionality was enhanced to allow multiple users simultaneous access to the boards and FidoNet provided gateways to the Internet. At the same time modem technology also advanced rapidly as faster designs were developed in response to the growing demand for sending large data files and images over the network.


Bulletin boards were a popular and inexpensive hobbyists' alternative to the Internet, which was initially dominated by the academic institutions. Usage reached its peak around 1996, but their popularity waned rapidly once the World Wide Web became established and the improved connectivity and functionality which it offered was realised. Some enterprising SysOps found a new life as Internet Service Providers (ISPs), bringing Internet access to their community of bulletin board pioneers and soon afterwards to the general public.


See also Key Internet technologies


1978 The world's first Compressed Air Energy Storage (CAES) plant, a 290 MW unit belonging to E.N. Kraftwerke, built at Huntorf in Germany. The pneumatic battery.


1978 Mechanically refuelable Metal-Air batteries proposed for electric vehicle propulsion by John F. Cooper and Ernest L. Littauer working at Lawrence Livermore Labs. Aluminium Air batteries were proposed as the most suitable cell chemistry. To date, metal-air batteries have not lived up to the promise claimed for them and several research programmes have been abandoned.


1978 Engineers at GCA (now defunct) invent the Step and Repeat System for exposing the photoresist on semiconductor wafers. Instead of using a single photomask for the whole surface of the wafer, a mask is made for a single integrated circuit. The devices on the wafer are then exposed one at a time with the wafer being moved to the next device between each exposure. The process is repeated until the pattern has been replicated across the entire wafer. The step and repeat process enabled major improvements in photolithography with increasing resolution and finer line-widths.


1978 "Speak and Spell" children's educational toy launched by TI. It included a 4-bit micro controller, two 128-kbit ROMs and a speech synthesis chip and was the first use of the digital signal processor (DSP) concept in a commercial product. Using a method known as Linear-Predictive Coding (LPC) more than 100 seconds of linguistic sounds could be stored in a highly compressed format in the 128 KB ROM chip which was very important in the days when ROM space was expensive. The speech synthesis chip allowed the basic sounds to be assembled into intelligent speech.


The original idea was proposed to a less than enthusiastic TI management in 1976 by Paul Breedlove and implemented by new recruit Richard Wiggins and senior designer Gene Frantz. Speak and Spell created a new market for a new type of device and DSP chips turned out to be one of TI's most successful products.


The modern DSP chip is a special-purpose CPU used for digital signal processing applications. It is a programmable device, with its own native instruction code typically providing ultra-fast instruction sequences, such as shift and add, and multiply and add, which are commonly used in math-intensive signal processing. Usually dedicated to a single task, they can be much faster than microprocessors which are designed to be general purpose devices. DSP chips are capable of carrying out millions of floating point operations per second.


They often include dedicated software such as mathematical transforms, for example the Fast Fourier Transform (FFT), for carrying out special tasks. The first application of the FFT in a DSP was in the analysis of seismic data gathered in oil exploration tests. The FFT enabled the filtering of the desired signals from noise and interference in the seismic data. The effect of reverberations which masked the returned signal could thus be removed from the signals reflected from rock strata. The FFT is now used in dozens of applications such as digital filtering, selective amplification of some frequencies and the suppression of others, audio and video signal compression and decompression, encryption and the analysis of complex signals into their spectral components.
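
A minimal sketch of that filtering idea using Python with NumPy (assumed available): bury a tone in noise, transform to the frequency domain, zero the weak spectral components and transform back:

    import numpy as np

    rate = 1000                          # samples per second
    t = np.arange(rate) / rate           # one second of time
    # A 50 Hz "signal" buried in broadband noise.
    noisy = np.sin(2 * np.pi * 50 * t) + np.random.normal(0, 1.0, rate)

    spectrum = np.fft.rfft(noisy)        # to the frequency domain
    spectrum[np.abs(spectrum) < 0.5 * np.abs(spectrum).max()] = 0   # crude filter
    filtered = np.fft.irfft(spectrum, n=rate)   # back to the time domain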


Computerised axial tomography (CAT) is an example of DSPs used for image processing. X-rays from many directions are passed through the section of the patient's body being examined. Rather than creating a single photographic image, the DSP converts the detected x-ray signals into digital data which is used in combinations to create images which appear to be slices through the body showing much more detail than in a conventional exposure, allowing significantly better diagnosis and treatment.


DSPs are also widely used in a myriad of consumer products including cellphones, compact disks, sound cards, video phones, modems, hard disks and digital TVs.


1979 Dan Bricklin, an MBA student at Harvard Business School, conceived the idea of the spreadsheet and together with his friend Bob Frankston from MIT wrote VisiCalc. VisiCalc was the "killer application" which turned the personal computer from a curiosity into a necessity and rapidly became the indispensable tool for engineers, accountants and marketing planners worldwide. Unfortunately he did not patent the idea, being advised by his patent attorney that software was (at that time) difficult and costly ($10,000) to patent with the likelihood of success being only 10%. He thus missed out on reaping the full rewards of his innovative idea.


1980 Patents issued on the first Zebra Sodium/Nickel chloride cell. Originated in the mid 70s by the Council for Scientific and Industrial Research (CSIR) in South Africa, it was finally developed and patented by the UK Atomic Energy Authority in Harwell.


1980 The high power density, deep cycling AGM (Absorbent Glass Mat) Lead Acid battery invented. It was introduced for military aircraft in 1985.


1980 The Insulated Gate Bipolar Transistor (IGBT) demonstrated by Indian born B. Jayant Baliga working at General Electric. It is a fast switching device capable of handling very high currents.


1980 American physicist Carver Mead, based at Caltech, and computer scientist Lynn Conway, a transsexual refugee from an unsympathetic IBM, co-authored the engineering textbook, Introduction to VLSI Systems, which quickly became the leading resource for designers of Very Large Scale Integrated Circuits. Mead had predicted in 1972 that transistors could be made as small as 0.15 microns, much, much smaller than the 10 micron state of the art technology at the time, and he spent the intervening years developing the technology to achieve this submicron goal. Key to this was his development of the silicon compiler, a CAD application, analogous to a software compiler, which allowed the chip designer to specify the functions required on the chip in an easy to understand structured language. The resulting program was then translated by the computer into the tracks making up each layer of the silicon circuit and output to a high resolution plotter which provided the etching patterns for chip fabrication. This technology not only provided the necessary tools for a new generation of microprocessors and complex devices, it also encouraged the setting up of innovative fabless semiconductor companies supported by specialist chip foundries.


Conway, who was born and raised as a boy, did pioneering work on software design and computer architecture at IBM, where she was known as Robert, but she was fired when she informed the company she was about to undergo a sex change operation. Continuing her career as Lynn Conway, she was elected to the National Academy of Engineering and went on to be appointed Professor Emerita of Electrical Engineering and Computer Science at the University of Michigan.


1980 Intelsat V, the first satellite to provide commercial Direct TV Broadcast by Satellite (DBS) service and the first commercial satellite to use three axis stabilisation, was launched by Ford Aerospace. The design team was led by Robert E. Berry.

See a description and image of Intelsat V and more about the satellite technologies it used.


1981 The scanning tunneling microscope (STM) was developed at IBM Zurich by German engineer Gerd Binnig and Swiss engineer Heinrich Rohrer. It does not give a direct image of an object like a true microscope does, but explores the structure of a surface by using a stylus that scans the surface at a fixed distance from it. It employs the principles of quantum mechanics and provides a higher resolution image than the SEM. Electrons tunnel between the tip of the stylus and the surface of the sample, and the tunneling current depends on the distance between tip and specimen. An image of the surface is constructed from the pattern of current flows. It is even possible to see, move and position individual atoms, which makes the scanning tunneling microscope an important tool in nanotechnology. (See also Drexler below)

See also TEM and SEM

Binnig and Rohrer were awarded half of the 1986 Nobel Prize in Physics for their achievement, sharing the prize with Ernst Ruska who built the first electron microscope in 1932.


1981 Kim Eric Drexler at MIT in the USA published his paper on nanotechnology describing the physical principles of molecular manufacturing systems - Using nanomachines to make products with atomic precision.


1981 Paul MacCready's Solar Challenger, the first PV Solar-powered airplane, flies.


1982 Solar One, America's first commercial solar-thermal power plant opens in California demonstrating the feasibility of high power solar generating systems. More than 1,800 computer-controlled tracking mirrors reflect sunlight onto a 300-foot boiler tower, where steam is produced for generating 10 MegaWatts of electricity.


1982 Professor Kurt Petersen of Stanford University launched a new technology with his visionary publication, "Silicon as a Mechanical Material", in which he proposed using semiconductor processing techniques and microelectronics materials to build microscopic mechanical and electromechanical components. It became the foundation of the MEMS and NEMS industries.


1983 The first computer with a user friendly Graphical User Interface (GUI) for personal computers, the Apple "Lisa" was released by Steve Jobs from a project which owed many of its ideas to researchers at the Xerox PARC (Palo Alto Research Centre). The concepts had however first been demonstrated by Doug Engelbart in 1968.


1983 The Internet Domain Name System (DNS) was invented by two American computer scientists Jonathan B. Postel and Paul V. Mockapetris, working at the University of Southern California (USC).


Up to that time, the only major computer network was the ARPANet, launched in 1969, which was built to connect research centres across the United States and by 1981 the number of host computers connected to it was still only 213 but the number was rising quickly.

Host computers on the network were identified by numbers, which were perhaps memorable enough with a small number of hosts in the world, but as the network grew it became difficult to remember the numbers. It was easier to remember names such as "darpa.com". These names translated into numbers equivalent to the addresses in the underlying Internet Protocol (IP), so network users didn't have to remember the host computer's number but rather its name. Initially a single database, maintained at the Stanford Research Institute (SRI), called HOSTS.TXT contained all of the network hostnames and their related IP (Internet Protocol) addresses.


But Postel, who made many significant contributions to the development of the Internet, particularly with respect to standards, recognised the limitations of the existing systems to handle the rapid growth in the use of computers as well as the communications and interworking opportunities they were bringing. He asked Mockapetris to help to provide a solution to this challenge. The result was the DNS system which is essentially a centrally managed, distributed database system containing the names and addresses of all the Internet hosts world-wide, with local data being recorded and maintained by local networks, and made available to all users of the Internet. This avoided the need for organisations to build and maintain their own address directories. The DNS standard was eventually adopted as the international directory of Internet domain names and their associated IP addresses and became an essential component of Internet functionality.


In more detail, the DNS is a hierarchical and decentralised system with a "tree and branch" data structure, for naming computers, services, or other resources connected to the Internet or to a private network. It translates the more readily memorised descriptive domain names into the numerical IP addresses needed for locating and identifying computer services and devices. It recorded the address of every mainframe, desktop and laptop computer, server, tablet, smartphone, scanner, printer, modem, router, and smart TV connected to the Internet. In its simplest operating form, when a computer or other device connected to the Internet sends a message to another named device on the network, a communication is first transmitted to a local DNS "name server" which transmits the corresponding digital IP address of the named recipient back to the sender. On receipt of the address, the sender may forward the message to the desired recipient, and alternatively or additionally the address may be stored temporarily in a cache memory for future use.

More importantly, the DNS also enabled the world-wide expansion of the network by delegating authority to qualified international users, allowing them to communicate with each other and share data and resources via the underlying network protocols by setting up new autonomous domains and subdomains so long as they adhered to agreed network standards. In this way, responsibility for providing and maintaining the information relating to web addresses could be shared between hundreds of thousands of "name servers" throughout the network.


The DNS generally translates the names to IP addresses that the underlying network understands. A domain name consists of one or more words or parts called labels, that are concatenated, and delimited by dots, such as "www.example.com".

The right-most label represents the top-level domain (TLD), which is typically an organisation type such as .com, .edu, .net, .org, .int, .gov or .mil, or a country code. The hierarchy of domains descends from right to left. Each label to the left specifies a subdivision, or subdomain, of the domain to the right. For example, the first subdivision or second level subdomain could be the name of the company with the third level subdomain identifying a machine (computer, printer, etc.) in the company network.


The number format is not part of the DNS standard, but for reference, the initial DNS system generally translated names to IPV4 (Internet Protocol Version 4) addresses. IPV4 addresses look like this:

  • The IP address consists of two parts, the leftmost part identifies the network and the rightmost part identifies the network node or host.
  • The corresponding numerical address represented by the domain labels is composed of four digital octets (eight bit digital numbers), separated by dots, for a total of 32 bits, which could also be written as four decimal numbers, each in the range 0 to 255, giving a total of 2^32, or 256 X 256 X 256 X 256, unique numbers which provide for up to 4,294,967,296 addresses.

By 1998 it became clear that the IPV4 addressing standard was insufficient to accommodate the explosive growth of the Internet so the standard was updated to version 6 (IPV6) which increased the potential address length to 128 bits allowing for an enormous 2^128, or about 3.4 X 10^38, addresses.
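
Both a DNS lookup and the address arithmetic take only a few lines in a modern language. A sketch using Python's standard library, with example.com as a placeholder host:

    import socket, ipaddress

    ip = socket.gethostbyname("example.com")   # one DNS query: name -> address
    addr = ipaddress.ip_address(ip)
    print(addr, "as a single integer:", int(addr))   # four octets packed into 32 bits

    print(2 ** 32)    # 4,294,967,296 possible IPV4 addresses
    print(2 ** 128)   # the IPV6 space, roughly 3.4 x 10^38 addresses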


See also Key Internet technologies


1983 American engineer Charles W. (Chuck) Hull patented the 3D Printing process or Stereolithography which enabled plastic parts to be created directly from 3D CAD files by the electronic "slicing" of the 3D CAD file into a series of thin cross sections, translating the results into 2D position coordinates, and using these data to control placement of the "build" material. This process is repeated for each cross section and the object is built from the bottom up, one layer at a time. A 3D printing machine builds objects by directing UV light from a computer-controlled laser onto the surface of a vat of photosensitive liquid resin. When the light strikes the surface, the photopolymer solidifies. When one layer is completed, the part is lowered into the vat, a thin layer of new liquid spreads over the surface, and the process is repeated. Because each layer is as thin as 0.001 in, complex objects can be made with very fine details.

Hull founded 3D Systems in 1986 to commercialise the technology which was originally used to avoid committing to expensive moulding tools by enabling rapid prototyping, design verification and pattern making. Stereolithography also allows products to be created on demand as well as short production runs of complex components avoiding the cost of expensive production tooling. The technology has subsequently been adapted to work with a range of materials including metals and food products.


1984 Flash memory invented by Fujio Masuoka working for Toshiba in Japan. It is a form of Electrically-Erasable Programmable Read-Only Memory (EEPROM) that allows multiple memory locations to be erased or written in one programming operation. It uses floating gate construction and depends on quantum tunneling effects induced by relatively high voltages for both writing and erasing. Flash memory is commonly used in USB memory sticks.


1984 Xilinx co-founder Ross Freeman invents the Field Programmable Gate Array (FPGA), a chip that can be customized by the user. It was a completely new form of user programmable logic in which the interconnections, and hence the logic functions, are defined by RAM cells. Typical logic elements include gates, flip flops and RAM lookup tables. Because the functionality is determined by the RAM, most FPGAs are re-programmable. FPGAs are ideal for building software defined products and for prototyping and low volume applications.
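
The principle that RAM contents define the logic can be seen in miniature: a k-input lookup table is simply a memory of 2^k bits holding a truth table (a toy Python model, not any vendor's architecture):

```python
class LUT4:
    """A 4-input lookup table: 16 RAM bits hold the truth table."""

    def __init__(self, truth_table_bits):
        assert len(truth_table_bits) == 16
        self.ram = truth_table_bits            # "programming" the logic cell

    def __call__(self, a, b, c, d):
        index = a | (b << 1) | (c << 2) | (d << 3)   # inputs address the RAM
        return self.ram[index]

# Program one LUT as a 4-input AND gate: only input 1111 (index 15) is true.
and4 = LUT4([0] * 15 + [1])
print(and4(1, 1, 1, 1), and4(1, 0, 1, 1))   # 1 0
```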


1985 Robert Curl, Harold Kroto and Richard Smalley accidentally discovered a new class of Carbon molecules called Buckminsterfullerenes during an experiment to replicate the formation of long chains of carbon atoms in the outer atmosphere of stars. They had set up an apparatus which vaporised graphite with a high power laser and allowed it to re-form in a vacuum. To their surprise they discovered a new molecule consisting of 60 Carbon atoms. Carbon 60 was the third molecular form (allotrope) of Carbon; diamond and graphite were the other two. It consists of 60 atoms of Carbon arranged in hexagons and pentagons that resemble a soccer ball or a geodesic dome as designed by Buckminster Fuller.

Also called Buckyballs or Fullerenes, Buckminsterfullerenes are extraordinarily stable and impervious to radiation and chemical destruction. The molecule is already finding experimental use in a wide variety of applications including nanomaterials, superconductors, lubricants, catalysts and electrodes in batteries and capacitors. See also Buckytubes

For this discovery the trio were awarded the Nobel Prize for chemistry in 1996.


Now there are four allotropes. See Graphene (2003).


1985 The first successful DIY cordless power tool, the "Twist" screwdriver powered by a 2.4 Volt NiCad battery, was introduced by Skil power tools, 24 years after the idea was pioneered by Black & Decker. Sadly nobody at Skil remembers the name of the employee whose idea did so much to boost the Skil brand name.


1980's Sealed Valve Regulated lead acid battery commercialised.


1986 Serial entrepreneur and inventor Stanford Ovshinsky introduced improvements to the Nickel-Metal Hydride battery originally patented by Klaus Beccu in 1967. Performance and energy density were improved by the use of special alloy structures and compositions and improved Ni(OH)2/NiOOH counter electrodes, enabling more widespread commercialisation of the technology.


1986 Gerd Binnig, together with Christoph Gerber and Calvin Quate, developed the Atomic Force Microscope, building on the scanning tunnelling microscope which Binnig and Heinrich Rohrer had invented at IBM in Zurich, able not only to image individual atoms but to move individual atoms around.

The first practical nanotechnology tool.


1986 Hitachi built and tested a 5 MJ Superconducting Magnetic Energy Storage (SMES) evaluation system storing energy in the magnetic field of a large superconducting coil which was connected to the 6.6 kV power line. The magnetic battery.


1986 Johannes Georg Bednorz and Karl Alexander Müller at the IBM Research Laboratories in Zurich found a new family of high temperature superconductors (HTS), based on ceramic materials which are normally insulators, whose critical temperature reached 35 K (−238 °C); within a few years further compounds with critical temperatures as high as 135 K (−138 °C) were found. The absence of electrical resistance at more practical temperatures enables very high currents to be carried without loss, opening up the possibility of a wider range of superconductor applications.


1986 Six Sigma quality standards, tools and techniques, a summary of developments in statistical quality control over the previous 50 years, were named and popularised by Motorola engineer Bill Smith. Six Sigma is actually a numerical measurement of quality: to achieve Six Sigma quality, 99.99966% of what you do must be without defects, a defect rate of just 3.4 parts per million (PPM) of the products or parts made. Working to this standard raised the performance bar for western manufacturers, used to relatively lax AOQL tolerances, making quality sampling plans irrelevant.


The mathematics of Six Sigma were derived by de Moivre in 1733 and later developed by Gauss, who also studied the Normal Distribution, represented by the Bell Curve, and defined the values of the mean and the standard deviation, denoted by the Greek letter σ (sigma). It is a characteristic of the normal probability distribution, also called the Gaussian Distribution, that 99.73% of all occurrences fall within plus or minus three standard deviations (3σ) of the mean, a spread of six standard deviations (6σ), and 99.9999998% fall within ±6σ. The Six Sigma figure of 99.99966%, or 3.4 defects per million, corresponds to ±6σ tolerance limits with the conventional allowance for the process mean to drift by 1.5σ, leaving 4.5σ to the nearer limit. The normal distribution can represent the probabilities of occurrence of random errors, or the spread of characteristics of certain populations. In manufacturing it can represent the frequency of occurrence of a characteristic, such as a dimension, a resistance value or a temperature, compared with its deviation from the norm or desired value, in other words the tolerance spread of the manufacturing output. See diagram of the Normal Distribution.


There's no magic to six sigma. The tolerance of the desired characteristic is set by the requirements of the design or the performance requirements of the product. Sigma is a measure of the variability of the output. Six sigma manufacturing simply means that the production process should be designed and controlled such that the six sigma spread of the desired output characteristics should be contained within the desired tolerance limits for the characteristic. The challenge comes in finding ways to achieve the reduced variability, or standard deviation, of the output.
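
The 3.4 PPM arithmetic can be checked directly from the normal distribution using Python's standard library (a minimal sketch; the 1.5σ mean-drift allowance is the Six Sigma convention mentioned above):

```python
from math import erf, sqrt

def phi(x):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def defects_per_million(sigma_level, mean_drift=1.5):
    """One-sided defect rate in PPM for tolerance limits at sigma_level,
    allowing the process mean to drift by mean_drift standard deviations."""
    return (1.0 - phi(sigma_level - mean_drift)) * 1_000_000

print(round(defects_per_million(6.0), 1))   # 3.4 PPM at Six Sigma
print(round(defects_per_million(3.0)))      # ~66807 PPM at Three Sigma
```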


1986 Starting in 1986, communications engineers Irwin Mark Jacobs and Klein Gilhousen, together with Italian-born mathematician, academic and communications engineer Andrew James Viterbi, and their team of engineers working at Qualcomm, the company they had founded the previous year, applied for a stream of patents which covered the essential architecture and building blocks of code division multiple access (CDMA) systems used in mobile communications applications. CDMA technology uses autocorrelation detection, pattern matching principles and the spread spectrum technology first used in Radar systems in the 1950s, and provides superior noise performance, security and frequency spectrum utilisation. The signals are modulated by a pseudorandom noise sequence to spread the signal bandwidth in a convolution modulator. Demodulation is based on the Viterbi algorithm, which Viterbi invented, but did not patent, in 1967.
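
The direct-sequence principle can be illustrated in a few lines: each data bit is multiplied by a pseudorandom ±1 "chip" sequence, and the receiver recovers it by correlating against the same sequence (a toy Python sketch, not Qualcomm's patented architecture; the 64-chip length and noise level are invented for the example):

```python
import random

random.seed(1)
chips = [random.choice((-1, 1)) for _ in range(64)]   # shared PN sequence

def spread(bit):
    """Transmit one data bit (+1 or -1) as 64 chips."""
    return [bit * c for c in chips]

def despread(signal):
    """Correlate the received chips against the PN sequence."""
    corr = sum(s * c for s, c in zip(signal, chips))
    return 1 if corr > 0 else -1

tx = spread(-1)
# Even buried in heavy random noise, the correlation recovers the bit.
rx = [s + random.gauss(0, 2.0) for s in tx]
print(despread(rx))   # -1, the transmitted bit
```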


1986 The first successful organic thin film transistor (OTFT) was produced by A. Tsumura, H. Koezuka and T. Ando working at the Mitsubishi Electronic Devices Lab in Japan. It was a normally "off" field effect device in which the source (drain) current could be modulated by a factor of 10² to 10³ by varying the gate voltage. In contrast to conventional inorganic semiconductors made from costly single crystals, organic semiconductors could be deposited over very large areas of low cost polymer film by comparatively simple evaporation or coating from solutions, opening the door to a host of new applications.


See a diagram of OTFT Construction.


1987 The first practical organic light emitting diode (OLED) was produced by Hong Kong-born Ching Tang and American Steven van Slyke, working at the Eastman Kodak research laboratories. Compared with Pope and Kallmann's 1960 experimental devices, they used multilayer stacks of extremely thin evaporated organic layers to achieve an order of magnitude improvement in efficiency and a corresponding reduction in the supply voltage to 10 Volts.


Because modern OLEDs are constructed from layers of thin films sandwiched together, they are flexible and can be rolled or bent into curved shapes, and since it is relatively easy to produce the organic films in large areas, OLEDs are particularly useful for making high definition TV and computer displays. The screens are made up from a grid of thousands of tiny triads of red, green and blue OLEDs, each triad forming a pixel which can be switched "on" and "off" independently, while the electro-optic principle of operation provides microsecond response times to produce moving displays. Unlike LCD screens, OLED screens don't require a backlight, which enables them to reproduce deep black colours and outstanding levels of contrast.


See more about OLEDs and How they work.


1988 40 MWh Lead Acid load levelling battery delivering 5000 Amps at 2000 Volts (10 MW) for 4 hours installed by Southern California Edison (SCE) at Chino in California.


1988 Electrical engineering professor Richard S Muller with colleagues Fan Long-Shen and Tai Yu-Chong at the University of California, Berkeley proposed a design for an electrostatic micro-scale motor fabricated from silicon. The following year they succeeded in producing the world's first operating Micro-Electromechanical Systems (MEMS) micro-motor. It was 100 microns across, about the width of a human hair, and was the first successful implementation of the silicon micromachining technology first proposed in 1982 by Petersen.


1988 Albert Fert of the University of Paris-Sud and Peter Grünberg of the KFA research institute in Julich, Germany independently discovered that they could obtain a magnetoresistive effect many times greater than the previously known AMR (anisotropic magnetoresistance) effect discovered 130 years earlier by Kelvin. They consequently named it "giant magnetoresistance" or GMR.

The GMR device is constructed from an alternate stack of ferromagnetic (Fe, Co, Ni, and their alloys) and non-ferromagnetic (Cr, Cu, Ru, etc.) metallic layers each only a few atomic layers thick.

GMR replaced the AMR technology previously used for read-heads in magnetic disks and now also finds use in current sensors where it has better sensitivity and output signal level than Hall effect devices.

A GMR current sensor works as follows. A conductive non-magnetic layer, which carries the sensor current through the device, is sandwiched between two layers of ferromagnetic material whose magnetic moments face in opposite directions, due to antiferromagnetic coupling between the layers, but at right angles to the sensor current path. The current to be measured flows through an external conductor which creates a local magnetic field into which the sensor is placed. When no current flows in the external circuit, no external magnetic field is present; the magnetic moments of the two ferromagnetic layers remain anti-parallel and the resistance of the sensor current path is high, so the sensor current is very low. When an external magnetic field is present, due to the current being measured, the magnetic moments of the ferromagnetic layers both line up in the same direction as the external field, in parallel with the sensor current. This causes the resistance of the sensor current path to drop dramatically, by as much as 50%.


1988 Italian engineer Leonardo Chiariglione established the ISO standardization activity known as The Moving Picture Experts Group (MPEG) with the mandate to develop international standards for compression, decompression, processing, and coded representation of moving pictures, audio, and their combination, in order to satisfy a wide variety of applications. Membership was drawn from experts in over 200 companies and research establishments world wide. It was an essential step in encouraging the sharing of information about compression techniques which led to the development of hardware and software products which enabled the transmission of audio and video files over the limited bandwidth links available on the Internet, greatly increasing its utility.


See also Key Internet technologies


1988 Through mathematical analysis, Bellcore electrical engineer Joseph W. Lechleider proved the feasibility of sending broadband signals down the standard twisted pair Copper wires used to connect domestic subscribers to their local telephone exchange and demonstrated it at Bell Labs.

For generations the bandwidth of signals carried on the network of twisted pair Copper wires known as POTS (the plain old telephone system) had been limited to frequencies between 300 Hz and 3.4 kHz. This restriction however was not due to limitations of the Copper wires themselves but to the specifications of the equipment connected to the lines such as multiplexers, switches, amplifiers and the telephone instrument itself. Lechleider showed that the existing world wide installed network of telephone lines could be adapted to carry broadband signals without incurring major new infrastructure investment. The technology was named Digital Subscriber Line (DSL).

The name however is a misnomer since, although a DSL circuit provides a digital service, it actually uses analogue transmission by modulating sinusoidal carrier waves, just as in a modem except at higher frequencies. This is because modulated sinewave carriers occupy less bandwidth than the corresponding digital signals. Carrier frequencies in DSL modems sit above the frequency band of the regular telephone service ranging from 4 kHz to as high as 4 MHz enabling the DSL and telephone services to coexist on the same Copper pair facility.

The DSL system provided a bandwidth of 8 Mbps or more, sufficient for the transmission of large data files, multi-coloured graphics, high definition images, music and video over the Internet. When DSL systems were eventually rolled out they were an instant success since the World Wide Web was by then becoming known as the "World Wide Wait" because the bandwidth limitations of the network and subscriber connections slowed down the access to the new facilities offered by the Web. See WWW below.


The actual implementation of the technology was known as ADSL (Asymmetrical Digital Subscriber Line) since it was designed with download speeds of 8 Mbps while upload speeds were only 1 Mbps to mirror the way most users used the Internet - downloading much more information than they ever uploaded. By limiting the upload speeds the overall system noise performance could be improved and this could be traded off to allow potentially higher download speeds.

In early implementations of ADSL, known as the carrierless amplitude/phase (CAP) version, the available bandwidth is divided into three distinct bands, widely separated to minimise the possibility of interference between channels: the 0 to 4 kHz band for voice signals, 25 kHz to 160 kHz for upstream signals and upwards of 240 kHz for downstream signals. The existence of loading coils (Pupin coils), installed on existing Copper lines to limit distortion of the telephone signals, unfortunately puts an upper limit on the bandwidth capability of the line and also limits the possible distance of the DSL subscriber from the exchange to about 3 miles (5000 metres), since the signal quality and connection speed decrease as distance from the exchange increases. Local line speeds are also reduced due to congestion as more users attempt to send data simultaneously through the same exchange.


The DSL system was further improved by John Cioffi, a professor at Stanford University's Department of Electrical Engineering, who in 1993 developed discrete multitone (DMT), a version of orthogonal frequency-division multiplexing (OFDM). DMT (and OFDM) is a method of separating the DSL bandwidth into 249 separate 4 kHz frequency bands or channels, and assigning a virtual modem to each channel. Each channel is monitored and, if the quality becomes impaired, the signal is shifted to another channel so that the best available channels are always used for the transmission. Although DMT is more complex to implement than CAP, it gives more flexibility and better overall system performance on lines of variable quality.
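
The essence of DMT and OFDM, one QAM symbol per subcarrier modulated with a single inverse FFT, can be sketched in a few lines of Python with numpy (a bare-bones illustration that omits real ADSL details such as bit-loading, the cyclic prefix and line noise):

```python
import numpy as np

N = 16                                     # subcarriers (ADSL DMT uses ~250)
qam = np.array([1+1j, 1-1j, -1+1j, -1-1j]) # a simple 4-point constellation
symbols = qam[np.random.randint(0, 4, N)]  # one QAM symbol per channel

tx = np.fft.ifft(symbols)                  # modulate all channels in one step
rx = np.fft.fft(tx)                        # demodulate (ideal, noiseless line)

print(np.allclose(rx, symbols))            # True: every channel recovered
```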


See also Key Internet technologies


1989 British communications and computer engineer Timothy John Berners-Lee, working at CERN the European Particle Physics Laboratory in Geneva, invented the first practical system for global information sharing based on hypertext which could use the Internet as a communications medium. The concept of hypertext was not new but Berners-Lee's proposal was the first to include all the tools necessary for implementing a working system which he later called the World Wide Web (WWW).

He defined the language HTML (HyperText Mark-up Language) for specifying information content, document layouts and links to other sites, URLs (Universal Resource Locators) to identify the location of each web page and HTTP (HyperText Transfer Protocol), the set of rules for linking to pages on the Web. The following year he wrote the first browser, a text based method for retrieving and displaying the documents.
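
HTTP itself is a plain-text request/response protocol, so a page can be fetched "by hand" over a TCP socket, as in this minimal Python sketch (it uses the reserved example.com domain, and real browsers send many more headers):

```python
import socket

# Open a TCP connection to the web server and speak HTTP/1.0 directly.
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):           # read until the server closes
        reply += chunk

print(reply.split(b"\r\n")[0])             # e.g. b'HTTP/1.0 200 OK'
```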

Berners-Lee's concept for the World Wide Web was made available royalty free to Internet users in 1991. His invention made it easy to store and retrieve information in an agreed common format and greatly simplified access to the Internet, taking it out of the universities and making it available to the public at large, not just to a few technical specialists.


The usage of the Web accelerated even more in 1993 when students at the US NCSA (National Center for Supercomputing Applications), Marc Andreessen aided by Eric Bina, introduced NCSA Mosaic, the first user friendly browser with a Graphic User Interface (GUI) and support for sound, video clips, forms, bookmarks, and history files.


When asked why he did not profit from the enormous potential of the WWW by patenting his ideas, Berners-Lee, who is committed to a global open system, commented - "Then there would have been a lot of little webs"


See also Key Internet technologies


1989 Martin Fleischmann of Southampton University and B. Stanley Pons of the University of Utah announced they had achieved Cold Nuclear Fusion using a beaker of heavy water containing two metal electrodes - one of Platinum and one of Palladium. It promised an unlimited source of cheap energy from a small portable power unit. Essentially an electrolysis system which produces more energy, in the form of heat, than it consumes, it was greeted with skepticism by the scientific community. Millions of pounds of research money were subsequently ploughed into further investigations in many countries of the world, spawning over 3000 technical papers in the 1990s, but despite the enormous investments and the continuing world-wide research effort, many researchers have been unable to replicate Pons and Fleischmann's results although some claimed to have succeeded. In 1993 technology licensing rights for the cold fusion system were sold by Utah University and in 1996 US patents on the technology were granted to a Dr James Patterson (see below). However, to date, no practical cold fusion energy source has been produced as a result of all this activity. And incidentally, no energy has been produced by hot nuclear fusion either, despite heating suitable plasmas to 510 million degrees Celsius!!


Unfortunately there is no requirement to demonstrate a working model in order to receive a patent, and people can apply for patents for things that don't yet work, as many gullible investors have found to their cost.

Cold fusion has since entered the history books as a bad joke about bad science.


1989 US patent awarded to American physicist Paul M Brown for a Betavoltaic battery which provides direct conversion of nuclear energy into electricity. (Betavoltaic battery doesn't sound nearly as threatening as Nuclear battery, does it?).

Nuclear batteries were first demonstrated by Ohmart in 1951. During the 1950s nuclear batteries were developed by the US DOE and they have been in use by NASA since 1961. They were designed to meet the long life, high-voltage, high-current draw requirements of electrically powered space probes and satellites; however, the batteries used by NASA mainly use thermocouples to generate electricity indirectly from the heat, rather than the nuclear radiation, emitted by radioactive Plutonium-238. In 1971 a patent was awarded to Sampson for the Gamma Electric Cell which converted nuclear energy directly into electrical energy. The betavoltaic power cell is a similar device which captures the radiated energy in a semiconductor rather than in a dielectric material as in Sampson's cell. It contains mildly radioactive isotopes such as Tritium, an isotope of Hydrogen (Hydrogen-3), which emit only beta particles (electrons) as they decay, and a semiconductor material which catches the beta particles as they are given off. The impact of the beta electrons on the semiconductor P/N junction material causes a useable electric current to flow across the junction, in some respects similar to a photovoltaic (solar) cell.

It is claimed that the cells can produce high voltages, of the order of kilovolts (kV), but the power density is low at only 24 W/kg. The power output is therefore low, only tens of Watts, and the technology is only suitable for low power applications. Tritium has a half life of 12.5 years and the useful battery life is thus claimed to be about 25 years. The cells never need recharging.

Betavoltaics came into the public eye when the public was already jaundiced by the Cold Fusion scandal. Concerns have been expressed about the technical feasibility of the conversion process and the use of radioactive materials in consumer products as well as the shielding and containment they might require. (Small amounts of radioactive isotope Americium-241 are in fact already used in consumer smoke detectors while other radioisotopes are used in a variety of medical, industrial and agricultural applications)


Currently (2004) low power hybrid betavoltaic batteries are being developed for use in mobile phones and laptop computers. Because the radiation source is not susceptible to conventional controls on the level of energy emitted, the betavoltaic cell in effect acts as a charger which provides a constant trickle charge to a standard Lithium-Ion battery. The fundamental concept of this controversial device however still remains unproven and no products have yet reached the market.


During the development Brown was subject to considerable ridicule and harassment including death threats. He was killed in a motor car accident in 2001 at the age of 47.


1989 Britain's National Power company starts work on a load levelling battery employing Regenesys - Flow Battery - technology. The initial project, for TVA, is a 12 MW, 120 MWh battery. (See Regenesys 2003 below)


1989 Development programme for thin film batteries led by John Bates, initiated at the US Oak Ridge National Laboratory (ORNL) in Tennessee. Batteries are built up from cell components which are printed in layers on to ceramic and other substrates using techniques originally pioneered in 1941 with thick film circuits.


1989 Engineers at Boeing claim to have achieved photovoltaic cells with a 37% conversion efficiency by stacking two layers of semiconductor material each optimised for a different wavelength (red and blue light). Very little has been heard of these high efficiency cells since then.


1989 The first gateways between private email carriers and the Internet were opened following a decision by the US Federal Networking Council the previous year. This decision allowed commercial traffic to be carried on the Internet opening the door to its commercial exploitation. Access was provided by Internet Service Providers (ISPs) and the first "killer app" was commercial email service.


1990 Alan Emtage, a student at McGill University in Montreal created Archie, the Internet's first search engine. This was before the World Wide Web had taken off and his system consisted of a searchable database of accessible files on FTP sites. At the time, using the File Transfer Protocol (FTP) was the main way of accessing files on other computers. There was still no way of searching the contents of the files.


The first search engine which enabled the file contents to be indexed was ALIWEB (Archie Like Indexing for the WEB) developed by Martijn Koster and presented at the first WWW Conference in Geneva 1994. It allowed users to submit web pages, page descriptions and keywords for indexing, but since few users provided the information, ALIWEB was not widely used.


The first web robot or "bot", the World Wide Web Wanderer was created in 1993 by Matthew Gray at MIT. It was a software application that was programmed to visit Internet servers, listing their files and creating a corresponding index. Gray's index was called Wandex. The process was called web crawling or spidering.

Wandex had the limited goal of measuring the size of the Web.


Later in 1993 Jonathan Fletcher, working at Stirling University in Scotland, launched JumpStation, the first search engine to use the three key functions of crawling, indexing and searching. It used a web robot to visit servers and crawl through (spider) their files, building an index of their pages as in Wandex, but also allowing this index to be searched using the page titles and headings as keywords. The robot however did not enter into the pages to allow keyword searches of the page content.

It managed to build a substantial database but unfortunately it failed to get any backers, not even the university, and was wound up when Fletcher left the university later that year.

JumpStation was followed in 1994 by the full function WebCrawler, created by Brian Pinkerton at the University of Washington, which also indexed the web page contents enabling keyword searches of the contents.


Thereafter came many similar new search engines and directories which allowed users to submit their pages for indexing, until the arrival of Google, developed in 1997 by Stanford University graduate students Larry Page and Russian-born Sergey Brin. Google used a proprietary page ranking system which provided more relevant search results. It kept the ranking algorithms secret to prevent spamming of the index, and it was very fast. As its popularity grew it was able to increase the number of indexed pages, increasing in turn the breadth and depth of its database - a virtuous circle which has made Google the unchallenged search leader.
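
Google's production ranking system remains secret, but the underlying PageRank algorithm was published by Page and Brin in 1998. A minimal power-iteration version over a hypothetical four-page web (a sketch of the published idea, not Google's implementation):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:   # share this page's rank among its links
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# A hypothetical four-page web: C is the most linked-to page.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))   # ['C', 'A', 'B', 'D']
```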


See also Key Internet technologies


1990 Commercialisation of the NiMH battery after a relatively short period of development of only four years, helped by the fact that the new NiMH cells could be made using the same equipment that had been used to manufacture NiCad cells.


1990 The first volume introduction of Lithium secondary cells for consumer applications after over ten years of development.


1990's New battery technologies enable the development of cordless and portable devices (power tools, mobile phones, lap-top computers, PDAs, digital cameras, personal care items) and consequently boost demand for batteries. Increased volumes bring prices down, reinforcing demand.


1991 Carbon nanotubes or Buckytubes discovered by the Japanese electron microscopist Sumio Iijima who was studying the material deposited on the cathode during the arc-evaporation synthesis of fullerenes. Buckytubes can exhibit either semiconducting or metallic properties. They also have the intrinsic characteristics desired in nanomaterials used as electrodes in batteries and capacitors: a tremendously high surface area (~1000 m²/g), good electrical conductivity, and, very importantly, a linear geometry which makes their surface highly accessible to the electrolyte. Buckytubes have the highest reversible capacity of any Carbon material for use in Lithium-Ion batteries.


1991 Swiss scientist Michael Grätzel and co-workers at the Swiss Federal Institute of Technology patent the Grätzel solar cell, a regenerative cell depending for its operation on a photoelectrochemical process similar to photosynthesis.


1991 American computer software engineer Philip R. Zimmermann published his Pretty Good Privacy (PGP) encryption program. Because the RSA public key encryption algorithm is very slow to execute, it is not very convenient for encrypting very large messages. PGP instead used RSA to encrypt a randomly generated symmetric key for a faster encryption algorithm, which was then used to encrypt the message. Zimmermann released PGP free of charge to the public but fell foul of the U.S. Government, who regarded his encryption algorithm as a military weapon, useful to the enemy. Starting in 1993 he was subjected to a prolonged criminal investigation and threatened by the US Government with prosecution for illegal "munitions export without a license". However he circumvented this by publishing the source code of PGP in a book via MIT Press which was widely distributed. While the export of military software was banned, no such regulation applied to books, which were considered free speech. After a public campaign led by early internet users and supporters throughout the world, the investigation was eventually closed in 1996 and no charges were brought. PGP encryption is now widely used worldwide.
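
The hybrid pattern PGP popularised, a fast symmetric cipher for the bulk message with slow RSA protecting only the small session key, can be sketched with the third-party Python cryptography package (an illustration of the principle, not PGP's actual message format):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's RSA key pair (RSA alone is too slow for bulk data).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: encrypt the message with a random symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"A very large message...")

# ...and encrypt only the small session key with RSA.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = private_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk message.
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
print(recovered)
```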


1992 Austrian-born Karl Kordesch of Canada patents the reusable alkaline battery, the so-called Rechargeable Alkaline Manganese (RAM) battery. Kordesch holds 150 patents on battery and fuel cell technology.


1993 John Cooper working at the Lawrence Livermore Labs patents the Zinc Air refuelable battery, using a cell chemistry first demonstrated by Heise and Schumacher in 1932. The battery is charged with an alkaline electrolyte and Zinc pellets which are consumed in the process to form Zinc oxide and Potassium zincate. Refueling takes about 10 minutes and involves draining and replacing the spent electrolyte and adding a new charge of Zinc pellets. The short refueling times possible with mechanical charging are particularly attractive for EV applications. The spent electrolyte is recycled.


1994 Bellcore patent on Plastic Lithium Ion (PLI) technology granted. Lithium polymer cells with a solid polymer electrolyte. The solid state battery.


1994 Industry consortium set up by Mercedes Benz and MIT in the USA to define a new set of automotive industry battery standards to address the problem of increasing demand for on-board electrical power. Currently there are over 50 industry members and the result has been the establishment of the PowerNet 42V standard based on a 36 Volt operating / 42 Volt charging battery. The operating voltage was chosen because it could conveniently be provided by three standard 12 Volt battery modules. Applications using this standard have been slow to materialise.


1994 The Bluetooth wireless technology standard used for exchanging data over short distances between fixed and mobile devices such as computers, mobile phones and loudspeakers was launched by Ericsson, the Swedish telecoms company. Originally intended for communicating with wireless headsets, the technology was initiated by engineers Nils Rydbeck and Johan Ullman and brought to fruition by their design team. It operates in the UHF band between 2.4 and 2.485 GHz, providing a personal, local network with multiple channel interconnectivity for two to eight connected devices, or users, over a range of 10 to 100 metres, depending on the transmitter power. Once a network is established, one device takes the role of the master while all the other devices act as slaves. The communications protocol is based on frequency-hopping spread spectrum technology.

(See also USB hard wired connections below.)


The system provides a single communications standard for multiple users, avoiding the proliferation of different or incompatible communications protocols. The "Bluetooth" name was proposed in 1997 by Jim Kardach after tenth-century Danish King Harald Blåtand Gormsen, nicknamed Harald Bluetooth because of a bad tooth, who united Norwegian and Danish tribes into a single kingdom with a single language.


1995 In 1994 a group of seven companies (Compaq, DEC, IBM, Intel, Microsoft, NEC, and Nortel) began a joint project to develop the Universal Serial Bus (USB), a standard hardware and software protocol for connecting many different types of hardware devices to a computer. It was designed to replace the numerous standard and proprietary serial and parallel data connections then being used between computers and peripheral devices and to accommodate many different types of hardware devices. It was also designed to simplify software configuration of all devices connected to the USB, as well as permitting greater data rates for external devices.

A team led by Indian-American computer architect, Ajay V. Bhatt, worked on the standard at Intel and produced the first integrated circuits supporting the USB the following year (1995).


The most common standards used in the personal computer industry at the time were the RS232 serial communications bus, used for connecting to computer terminals and modems, and the Centronics parallel bus used for connecting to printers (see below), but there were others carrying video and other signals. Currently the USB protocol is the standard most used for the hard wired connections linking computers, laptops and tablets to printers, digital cameras, scanners, flash drives, mass storage devices, mobile phones, iPods, MP3 players, keyboards, mice and joysticks.

(See also Bluetooth wireless connections above.)


The USB uses a star topology and was designed to be "hot pluggable" as well as "plug and play" with a standard 4 pin connector incorporating 2 pins connected to a twisted pair for carrying a differential data signal, a ground (earth) line and a 5 Volt power rail. The data rate specified in the original version was 1.5 Mbits/sec but this has increased with subsequent versions up to 3 Gigabits/sec. Up to 127 devices including hubs may be connected to the bus. Cable lengths are limited to 5 metres (16 feet).

The host transmits data packets to, or receives data packets from, all the devices connected to it, but each device has a unique address so that only one device can actually receive or transmit data at any one time.


  • RS232 Serial Interface

    The RS232 protocol was first introduced in 1962 by the Radio Sector of the Electronic Industries Alliance (EIA) standards body. The RS232 is a single channel, serial connection which transmits data one bit at a time down a pair of wires; a separate pair of wires is needed to pass data in the other direction (duplex operation). The data rate was typically 20 kilobits per second (kbps). A common ground between the PC and the associated device is necessary. Hot-plug is not supported, but is sometimes accommodated. Originally intended for use with computer terminals and modems, several versions were agreed including 9 pin and 25 pin connections. For connecting personal computers to modems the 9 pin connection was most used.

     

  • Centronics Parallel Interface later adopted as IEEE 1284

    The Centronics interface was originally developed in the 1970s by the Centronics printer company for connecting computers to their printers. It is a single channel, eight bit parallel connection which transmits eight bits (one word) of data simultaneously down eight parallel data wires. It had an unusual connecting cable which plugged into a 25 pin parallel port on the computer and a 36 pin male and female connector at the printer or other device. The remaining lines are used to read status information and send control signals. Data flowed in one direction only, from the computer to the printer or other device. Data rates were typically 1 megabyte per second (1 MBps). Later versions were adapted to permit bi-directional data flow.


See also the Wireless Internet - WiFi


1995 Introduction of the pouch cell made possible by Lithium PLI technology.


1995 Duracell and Intel developed the Smart Battery system for Intelligent Batteries and proposed the specification with its associated SMBus as an industry standard.


1995 On-cell battery condition indicator or fuel gauge for consumer primary cells introduced by Energizer.


1995 English stuntman, swimming professional and inventor Trevor Baylis devised a method of producing a practical long lasting supply of electricity from a wind up spring. Using springs to generate electricity is nothing new, but prior to Baylis' invention, the energy tended to be produced for only a short duration. Baylis devised a clockwork battery by connecting the spring through a gear box which released the energy slowly to a dynamo.


1995 BMW abandons flywheel energy storage after a test technician is killed and two others injured when the containment enclosure, weighing 2,000 kg, failed to protect them from shrapnel when a high speed rotor failed. (See Flywheels)


1996 Dr James Patterson, a 74 year old American chemical engineer and inventor, was awarded the first of eleven US patents on clean energy "Patterson Power Cells". Although his company Clean Energy Technologies Inc. (CETI) claims they are not based on cold fusion, they use the discredited Pons-Fleischmann method of heating water by electrolysis, regarded by some skeptical scientists as a perpetual motion machine. So far, these amazing energy producing electrolysis cells have failed to displace batteries from the market.


1996 Researchers Theodore O. Poehler and Peter C. Searson at The Johns Hopkins University demonstrated an all-plastic battery, using doped polymer, polypyrrole (a five-membered-ring organic molecule capable of redox reactions), composite electrodes in place of the conventional electrode materials, as well as conducting and insulating polymers for the electrolyte and the casing. The composite electrodes are made from polypyrrole-Carbon fibre in which the carbon fibres act as an electrically conductive skeletal electrode for current collection. The battery generates 2.5 Volts, is flexible and operates over a wide temperature range with a long cycle life; it can be made as thin as a credit card and is not detectable by conventional airport security devices. Despite claims that the cells are inexpensive and easy to manufacture, products using the technology have so far not appeared in the consumer marketplace.


1996 Solar-powered aeroplane, the Icare, flew over Germany. Developed by Braunschweig University and the Stuttgart Academic Flying Group, it covered a distance of 350 kilometres during a five hour flight. The wings and tail surfaces of the Icare are covered by 3000 super-efficient PV cells with a total area of 21 m².


1997 After over seven years of deliberations and reviews of competing technologies the IEEE, under the leadership of Dutch engineer Vic Hayes, a research fellow at the Technical University of Delft, finally published its standards, known as IEEE 802.11, for wireless LANs (Local Area Networks). After the initial announcement, two more versions were soon ratified: 802.11b, which operates in the industrial, scientific and medical (ISM) band of 2.4 GHz using direct sequence spread spectrum (DSSS) modulation with a raw data rate of 11 Mbit/s, and 802.11a, which operates in the bands of 5.3 GHz and 5.8 GHz using orthogonal frequency division multiplexing (OFDM) with a maximum data rate of 54 Mbit/s.


Meanwhile the industry and users had not been idle and the same year, after three years of development led by professor Alex Hills, the Carnegie Mellon University launched the first major campus wide wireless LAN. Known as "Wireless Andrew", it had 73 access points and operated on a frequency of 900 MHz with a raw data rate of 2 Mbit/s.


With similar functionality to the Ethernet, the 802.11 standard became known as the "Wireless Ethernet" or simply Wi-Fi.

Unlike the Ethernet protocol, however, the Wi-Fi access control system uses carrier sense multiple access/collision avoidance (CSMA/CA) rather than the Ethernet's carrier sense multiple access/collision detection (CSMA/CD). This is because it is difficult to detect collisions over a wireless medium: once the radio transceiver begins a transmission, the effective signal strength of its own transmission is so much greater than that of any other remote transceiver that a collision cannot be detected. The 802.11 standard therefore specifies a collision avoidance system in which a transceiver must wait until it hears no other transmissions before it is allowed to transmit. It then waits for an additional randomly chosen amount of time, and, provided that it hears no new transmissions, it can begin its own transmission.
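
The listen-then-randomly-wait rule can be modelled in a few lines of Python (a much simplified sketch of the 802.11 backoff idea, not the full Distributed Coordination Function; the slot time and callback functions are invented for the example):

```python
import random
import time

SLOT = 20e-6   # 20 microsecond contention slot (illustrative value)

def transmit(frame, channel_busy, send, max_slots=31):
    """Simplified CSMA/CA: listen until idle, wait a random backoff, send."""
    while True:
        while channel_busy():                 # carrier sense: defer while busy
            time.sleep(SLOT)
        deferred = False
        for _ in range(random.randint(0, max_slots)):
            time.sleep(SLOT)                  # random wait avoids collisions
            if channel_busy():                # another station started first
                deferred = True
                break
        if not deferred:
            send(frame)
            return

# Toy usage: an always-idle channel and a send routine that just prints.
transmit(b"frame 1", channel_busy=lambda: False, send=print)
```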


The publication of the wireless LAN, or Wi-Fi, standards took some of the technology risk out of developing wireless communications thus creating an enormous new market for wireless products with Ethernet-quality data rates providing access to the Internet, replacing the much slower modems, and for linking PCs to the telephone network, to each other and to peripheral devices such as printers, scanners and memory storage.

Since 1997 several variants of the 802.11 "standards" have been published. At the last count there were 29! and it's still called a standard.


See also Key Internet Technologies


1997 Annual shipments of photovoltaic modules reach 120 megawatts world wide.


1998 48 MWh Sodium Sulphur load levelling battery delivering 6 MW for 8 hours installed by NGK for Tokyo Electric Power Company (TEPCO).


1998 CODATA readjusts the value of 300 physical constants reducing uncertainties to 20% or less of what they had been previously, except for the gravitational constant which was determined to be even more uncertain than previously measured.


2000 Dr Randell Lee Mills, a Harvard-trained medical doctor and chemist who also studied biotechnology and electrical engineering at MIT, proposed the Hydrino Hydride Battery for which he claimed a theoretical energy density of 10,000 Wh/kg and a cell voltage of 70 Volts. This compares with 200 Wh/kg and a cell voltage of 3.6 Volts for a Lithium-Ion battery. Its operation depends on a new phenomenon of quantum physics which Mills claims to have identified. Using a catalyst, the orbiting electron in a Hydrogen atom can supposedly be persuaded to enter lower orbits (lower quantisation levels) than previously thought possible, forming new atoms which he called hydrinos and giving off huge amounts of energy in the form of ultra-violet light in the process. The hydrinos produced by this process are claimed to have unique physical and chemical properties which would make many new applications possible.


After the recent cold fusion débâcle you would think the scientific and investment community would be more cautious, yet Mills assembled a management board consisting of prominent captains of industry from the nuclear power and energy utilities, government advisors, academics and high ranking military officers, all with experience in nuclear power and all with impeccable credentials. At the same time he raised $25 million of capital from electric utilities and venture funds and lined up Morgan Stanley Dean Witter to arrange an IPO (Initial Public Offering) for his company BlackLight Power Inc (BLP). Mills filed for several patents for his inventions and in 2000 he was granted a US patent for "Low Energy Hydrogen Methods and Structure" detailing the fundamental theory of his invention and 499 novel aspects of his work. Two days later, prompted by an outside inquiry, the patent office became concerned that the hydrino concept was "contrary to the known laws of physics and chemistry" and rejected the other patents which were in the pipeline, including "Hydride Compounds" which had already been assigned a number and for which BLP had already paid the approval fee. BlackLight appealed, taking the issue to court, but the appeal was rejected. There's still no word on the IPO and the scientific community hasn't yet produced any more hydrinos.


2000 Indian chemist Sukant K. Tripathy working at the University of Massachusetts demonstrates polymer photovoltaic cells for making flexible solar panels using nanotechnology. Unfortunately he did not live to see his dream of bringing low cost solar power to his native India, since he drowned at the age of 48 shortly after making the announcement. However, his technology was quickly adopted and is being commercialised by Nobel laureate Alan Heeger. Possible developments being pursued are photovoltaic fibres which can be woven into fabrics, particularly for military applications including tents and uniforms, with the object of reducing the weight of the batteries the soldier must carry around to power his electronic equipment. Ideal for the Indian army perhaps?


2000's Trends

  • Environmental concerns and legislation creating a demand for "greener" energy which can be satisfied by wind, wave and solar power all of which use batteries for high power load levelling.
  • The same drivers are also creating a demand for cleaner more efficient vehicles for which battery power is a cost effective solution.
  • Smaller, lighter, lower cost batteries make electric and hybrid electric vehicles practical for the first time.
  • Increasing use of electronics to get the best performance out of the battery.
  • Cell manufacturing being concentrated in Asia with China taking a progressively higher share. Battery customisation remains close to the customer.
  • Lithium technologies taking an increasing share of the market.
  • Fuel cells used to power a variety of demonstrator vehicles. Still too expensive and complex for general adoption.

2001 John Smalley working at the U.S. Department of Energy's Brookhaven National Lab announced the development of nanowires, organic molecules called oligophenylenevinylene (OPV). These molecules are essentially "chains" of repeating links made up of Carbon and Hydrogen atoms arranged to promote strong, long-range electronic interactions through these molecules. They allow a very fast rate of electron transfer down the chain acting as extremely fine, low resistance wires only one molecule in diameter.


2002 Various patents filed on nanomaterials used in Lithium and other batteries to achieve increases in charge and discharge rates of 10 to 100 times.


2002 Commercialisation of solid state Lithium polymer thin film batteries based on patents from ORNL.


2003 Russian-born researchers Andre Geim and Kostya Novoselov working at Manchester University isolated Graphene, the world's first 2D material and the thinnest and strongest material ever discovered. It was well known that Graphite's slippery, lubricating properties were due to very thin layers of graphite sliding over each other, and that one-atom-thick, two-dimensional crystal graphene sheets might exist, but no-one had worked out how to extract them from the graphite.

Experimenting with a lump of bulk graphite, they removed some flakes from the material with common transparent adhesive tape (Scotch tape/Sellotape), and noticed that some flakes were thinner than others. By repeatedly separating the graphite fragments they managed to create flakes which were just one atom thick. They had isolated graphene, an allotrope of Carbon, for the first time and soon established that this new material had surprising, electrical, mechanical and thermal properties.

They published their initial findings in 2003 but their paper was rejected twice as it was thought so unlikely that one-atom-thick sheets could be stable. Eventually their paper was published in the journal "Science" in 2004. Since then there has been an explosion of interest as laboratories throughout the world have raced to investigate and develop a myriad of potential applications to exploit the properties of this wondrous material.


In 2010 Geim and Novoselov were awarded the Nobel Prize in Physics for their pioneering research on graphene.


See more about the properties and applications of graphene.


2003 Teeters, Korzhova and Fisher working at the University of Tulsa in the USA patent the nanobattery, so small that 60 of them would fit across the width of a human hair. Through nanotechnology, objects are built in such a way that nearly every atom is precisely placed. Such a tiny battery could be used to drive a microbe-sized submarine through a patient's blood vessels.


2003 University of California, Berkeley, physics professor Alex Zettl created the first nano-scale motor, 15 years after Berkeley engineers built the first micro-scale motor. The smallest motor made to date, it is about 500 nanometers across, 300 times smaller than the diameter of a human hair and small enough to ride on the back of a virus. It was the first example of nano-electromechanical systems (NEMS).


2003 University of California, San Diego, announced that they are developing something they call "smart dust." These are tiny robots, smaller than a grain of sand, powered by nano batteries, that could move through an artery, or through the air, or through contaminated water, to carry medication or sniff out hazardous materials.


2003 RWE, the German multi-utilities group and new owner of UK National Power (now renamed Innogy), pulled the plug on the Regenesys Flow Battery project before the battery was completed, after spending $250 million over 14 years.


2003 The world's biggest battery was connected to provide emergency power to Fairbanks, Alaska's second-largest city. Without power lines between Alaska and the rest of the U.S., the state is an "electrical island." The $35 million rechargeable battery contains 13,760 large Nickel-Cadmium cells in 4 strings weighing a total of 1,300 tonnes and covering 2,000 square metres. The battery can provide 40 megawatts of power for up to seven minutes while diesel backup generators are started.


2003 Worldwide battery sales

  • Total world sales value $48 billion.
  • Sales value of small rechargeable batteries - $7.6 billion.
  • More than 110 million automotive lead acid batteries were manufactured for more than 650 million vehicles on the world's roads. 81% of sales were to the replacement market.
  • Sales value of industrial batteries for traction and standby power applications - $14 billion
  • 500,000 electric bicycles per year sold in China.
  • Unit sales of light electric vehicles (Bicycles, scooters, motorcycles and city runabouts) expected to be 10 million in 2004.
  • The HEV/EV battery market is expected to grow at an AAGR of more than 50% to nearly $250 million in 2008.
  • Total battery demand expected to exceed $60 billion by 2006 and $65 billion by 2008.

2003 Finnish metallurgist Rainer Partanen patents the rechargeable aluminium air battery using nanotechnology to achieve very high energy densities.


2004 Toshiba demonstrated a direct methanol fuel cell (DMFC) small enough to power mobile phones. The fuel cell provides an output of 100mW from a cell measuring 22x56x4.5mm. A single charge of 2cc of methanol will power an MP3 player for 20 hours.


2004 The transistor count on a single Intel Itanium2 microprocessor chip was over 410 Million and the next generation is expected to exceed 1 Billion in 2005. It has a 128 bit system bus and an I/O bandwidth of 6.4 GB/sec. See also Moore's Law and 1952 transistor production volume and Intel 4004 microprocessor.


2005 Korean bioengineer Ki Bang Lee working at Singapore's Institute of Bioengineering and Nanotechnology, developed a paper battery powered by urine for use as a simple, cheap and disposable power source for home health tests for diabetes and other ailments. It is composed of paper, soaked in Copper chloride, sandwiched between layers of Magnesium and Copper and laminated in plastic. The test kit including the battery is about half the size of a credit card, 6cm by 3cm and 1mm thick. Typically the battery will provide around 1.5 Volts, with a maximum power output of 1.5 milliWatts with 0.2 millilitres of urine. A range of medical test kits incorporating biosensors or biochips is envisaged which use the body fluid being tested as the source of power and a variety of geometries and materials depending on the requirements of the test.


2005 Masaharu Satoh working at NEC in Japan reveals details of a high C rate Organic Radical Battery (ORB). This is a low capacity battery which runs for only a short period but can be charged and discharged at 100C. It is small and light but delivers very high power for a short period making it ideal for UPS applications, particularly for laptop computers. According to NEC, a 1 WattHour battery can deliver 100 Watts and can be recharged in less than one minute. It uses a graphite cathode coated with a specially developed polymer material (2,2,6,6-tetramethylpiperidinoxy-4-yl methacrylate PTMA - the organic radical) which freely donates electrons to achieve the high current carrying capacity.


2005 Fraser Armstrong working at Oxford University demonstrated the prototype of a biofuel cell which uses as fuel the small amounts of free hydrogen available in the atmosphere and an enzyme to promote oxidation, rather than an expensive catalyst. It doesn't use a membrane to separate the reactants and is unaffected by Carbon monoxide which poisons typical catalysts. Development is continuing.


2005 Chris van Hoof working at IMEC the Inter-university MicroElectronics Center at Leuven in Belgium demonstrated the latest version of his thermo-electric generator powered by body heat. Designed to be worn on the wrist, it uses 3500 Bismuth Telluride thermocouples generating a total of between 200 µWatts and 500 µWatts at up to 1.5 Volts intended for powering medical sensors.


2006 Researchers at MIT's Laboratory for Electromagnetic and Electronic Systems (LEES), John Kassakian, Joel Schindall and Riccardo Signorelli succeeded in growing straight single wall nanotubes (SWNT) with diameters varying from 0.7 to 2 nanometers and lengths of several tens of microns (one thirty-thousandth the diameter of a human hair and 100,000 times as long as they are wide) which they used to make enhanced double layer capacitors with major performance improvements.


2007 Apple launched the iPhone, a revolutionary, Internet capable smartphone. It was the brainchild of Steve Jobs, CEO at Apple, and Jonathan Paul "Jony" Ive, a British industrial designer who led the design team which made it a reality. It was a multi-function device which brought computing capability and Internet connectivity to the mobile phone. A touchscreen with a virtual keyboard enabled communication with the device, avoiding the need for a physical keyboard and mouse. Besides standard mobile phone functionality this platform supported a huge range of accessories, features and functions including:

  • Quad band mobile phone capability enabling communications in most regions of the world
  • Colour Graphical User Interface (GUI)
  • Computer operating system
  • Bluetooth connectivity
  • Wifi capability
  • USB interface
  • SMS Text messaging
  • Email sending and reception
  • Camera
  • 3 axis accelerometer - Changes the display from portrait to landscape (or vice versa) when the phone is rotated.
  • Proximity sensor - Turns off the display when the phone is close to the user's ear to save battery energy.
  • Memory capacity - Up to 16GB sufficient for storing large music and image libraries
  • Music playback capability
  • Video playback capability
  • Internet, HTML, browser providing access to Internet services
  • Access, via Apple and under their control, to the device's operating system and memory to upload software based applications, known as "Apps".
  • Access to Apple's media libraries from which music and apps could be purchased and downloaded. This led to the development of a huge volume of third party applications which brought valuable financial benefits to both Apple and the software developers.
  • In 2008, GPS functionality, a compass, motion sensors with 9 degrees of freedom, 2 image sensors and an ambient light sensor were added to the iPhone which enabled satellite navigation and a plethora of location based services to be developed. Video recording capability was also made available.
  • Built in videophone calling was introduced in 2010 by the addition of a second (front facing) camera.

All of this functionality was accommodated in a glass fronted package only 115mm long x 61mm wide and 11.6mm thick (4.53in x 2.40in x 0.46in) weighing 135 grams (4.76 oz). They even managed to squeeze a Lithium battery and its power management circuits into the case as well.


The iPhone was obviously not just the work of a pair of creative innovators, Jobs and Ive, working alone. Hundreds of engineers working at Apple and their partner companies contributed to the design, in the process spawning over 200 patents protecting their ideas.


By 2018 more than half of all website traffic worldwide was generated through mobile phones.


See more at Apple Early History

See also the Motorola Cell Phone

See also Key Internet technologies


2007 Sony announces the Sugar Battery, a Biofuel Cell using glucose as its fuel with enzymes for catalysts, developed by Tsuyonobu Hatazawa working with Professor Kenji Kano from Kyoto University. It consists of an anode and a cathode separated by a proton-conducting membrane. A renewable fuel, such as a sugar, is oxidised by microorganisms at the anode, generating electrons and protons. The protons migrate through the membrane to the cathode while the electrons are transferred to the cathode by an external circuit. The electrons and protons combine with Oxygen at the cathode to form water. It is expected to find use in medical applications.


2008 American inventor Lonnie Johnson announced a breakthrough method of turning heat into electrical energy, which he used in the design of a new form of thermoelectric battery, details of which were published in the January issue of Popular Mechanics.

Called the JTEC (Johnson Thermoelectric Energy Conversion) System, it is an all solid-state heat engine using Hydrogen as the working fluid, circulating between two Membrane-Electrode Assembly (MEA) stacks, one hot and one cold, operating on a thermodynamic cycle similar to the Ericsson Cycle. It depends on the electrochemical potential developed when a Hydrogen pressure differential is applied across a Proton Conductive Membrane (PCM).

The MEA is similar to a fuel cell stack and consists of a membrane and a pair of electrodes. On the high-pressure side of the MEA, Hydrogen gas is ionised (oxidised), releasing protons and electrons. The pressure differential across the stack forces the protons through the membrane while the electrons flow through an external load. On the low-pressure side, the protons are reduced by the electrons to re-form Hydrogen gas.

Conversely, if a current is passed through the MEA, a low-pressure gas can be "pumped" to a higher pressure.

The cycle needs an electrical jolt to start the proton flow. The resulting pressure differential produces a voltage across each of the MEA stacks. The higher voltage at the high-temperature stack forces the low-temperature stack to pump Hydrogen from low pressure to high pressure, maintaining the pressure differential.


The system can be compared to a gas turbine engine: the low-temperature MEA stack is equivalent to the compressor stage and the high-temperature MEA stack is the power stage.

The available energy is equal to the difference between the energy produced by the high-pressure stack and the energy consumed by the low-pressure stack. The larger the temperature differential between the stacks, the higher the efficiency. Johnson claims he can achieve conversion efficiencies of over 60 percent.
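
The underlying relationships are standard ones (a sketch, not Johnson's published analysis). Each MEA stack behaves as a Hydrogen concentration cell, so its open-circuit voltage follows the Nernst equation, where R is the gas constant, F the Faraday constant and the factor 2 reflects the two electrons released per Hydrogen molecule. With both stacks seeing the same pressure ratio, the net output voltage, and hence the ideal efficiency, scales with the temperature difference:

    E = \frac{RT}{2F} \ln\frac{p_{high}}{p_{low}}

    E_{net} = E_{hot} - E_{cold} = \frac{R\,(T_{hot} - T_{cold})}{2F} \ln\frac{p_{high}}{p_{low}}

    \eta_{ideal} = \frac{E_{net}}{E_{hot}} = 1 - \frac{T_{cold}}{T_{hot}} \quad \text{(the Carnot limit)}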


There has been very little news about the device since the initial announcement, and no practical products have yet been seen.


2010


2011 Researchers Yu-Chueh Hung, Wei-Ting Hsu and Ting-Yu Lin at the Institute of Photonics Technologies at Taiwan's National Tsing Hua University (NTHU), working with Ljiljana Fruk at the Centre for Functional Nanostructures at the Karlsruhe Institute of Technology (KIT) in Germany, demonstrated a photoinduced write-once read-many-times (WORM) organic memory device based on a DNA biopolymer nanocomposite (published in AIP's Applied Physics Letters). In other words, they showed that DNA can be used as a data storage medium.

The device consisted of a thin film of salmon (fish) DNA, embedded with nano-sized particles of Silver, sandwiched between two electrodes. Ultraviolet light was used to encode information: shining UV light on the system causes the Silver atoms to cluster into nano-sized particles which provide the platform for the data encoding. The device was able to hold charge under a low current, which corresponds to the off-state. Under a high electric field the charges pass through the device, which corresponds to the on-state. Once the system had been turned on it stayed on, and changing the voltage across the electrodes did not change the system's conductivity. Thus information could be written to the device but not overwritten and, once written, the device could possibly retain that information indefinitely.
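
The write-once read-many behaviour can be summarised in a few lines of code (a toy model in Python of the switching logic described above, not of the device physics):

    class WormCell:
        def __init__(self):
            self.on = False              # pristine cell: the "off" (charge-holding) state

        def write(self, bit: bool):
            self.on = self.on or bit     # a high field can only switch the cell ON;
                                         # nothing switches it back off again

        def read(self) -> bool:
            return self.on               # reading is non-destructive

    cell = WormCell()
    cell.write(True)                     # write the bit
    cell.write(False)                    # attempts to overwrite have no effect
    print(cell.read())                   # True - the stored bit is permanent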


2012 Following on from the previous year's research at NTHU and KIT, Harvard researchers George Church and Sri Kosuri took a major step towards a practical DNA data storage device by successfully storing 700 terabytes (about 5.5 petabits) of data in a single gram of DNA.
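
At its heart, DNA data storage is simply a mapping between bits and bases. The toy encoder below uses one bit per base, choosing between the two bases available for each bit value so as to avoid long runs of the same base, in the spirit of (but much simpler than) the scheme Church and Kosuri actually used:

    ZERO, ONE = "AC", "GT"   # a 0 bit may be written as A or C, a 1 bit as G or T

    def encode(bits: str) -> str:
        # One bit per base; where the preferred base would repeat the previous
        # one, use the alternative base instead to avoid homopolymer runs.
        out = []
        for b in bits:
            choices = ONE if b == "1" else ZERO
            base = choices[0] if (not out or out[-1] != choices[0]) else choices[1]
            out.append(base)
        return "".join(out)

    def decode(seq: str) -> str:
        return "".join("1" if base in ONE else "0" for base in seq)

    dna = encode("110100")
    print(dna, decode(dna) == "110100")   # GTAGAC True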


2012 Researchers Donald Sadoway and David Bradwell, working at MIT, produced working prototypes of a liquid metal battery using molten Magnesium and Antimony electrodes separated by a molten salt electrolyte. See a description of the Liquid Metal Battery.


2013 A team working at the US National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) achieved the World's first "over unity" energy gain in a controlled nuclear fusion reaction, demonstrating the potential feasibility of power generation by means of Inertial Confinement Fusion. In an experiment they irradiated a tiny 2 millimetre pellet of frozen Deuterium and Tritium fuel with a single pulse of 10 kJoules (2.8 Watthours) of X-ray energy, derived from 2.8 MJoules of energy supplied by 192 powerful laser beams, to produce 14 kJoules (3.9 Watthours) of fusion energy, a conversion gain of 1.4. Two months later they were able to produce 17 kJoules of energy from a similar set-up.
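
Spelling out the arithmetic makes clear what "over unity" means here: the gain of 1.4 is measured against the X-ray energy absorbed by the pellet, not against the far larger laser energy needed to generate those X-rays:

    G_{fuel} = \frac{14\,\mathrm{kJ}}{10\,\mathrm{kJ}} = 1.4 \qquad
    G_{laser} = \frac{14\,\mathrm{kJ}}{2.8\,\mathrm{MJ}} \approx 0.005

In other words, the lasers still delivered roughly 200 times more energy than the fusion reaction returned, which is why the result demonstrated feasibility rather than practical power generation.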

See details of the NIF reactor.


The Inertial Confinement Fusion (ICF) programme resulted from a 1957 meeting on the peaceful use of nuclear weapons, arranged by Edward Teller at LLNL, after which 26 year old physicist John Nuckolls picked up and ran with the idea of generating energy from a very small controlled thermonuclear explosion in a scaled down version of the Hydrogen bomb. He explored several alternative methods but lacked a controllable high energy driver to initiate the fusion in place of the atom (fission) bomb used for this purpose in the Hydrogen bomb. In 1960 the invention of the ruby laser by Theodore Maiman provided the solution and two years later LLNL established its first laser fusion project. Nuckolls was joined by another LLNL physicist, Ray Kidder, from the weapons design group, who immersed himself in laser research and carried out the calculations characterising laser driven implosions.

Construction of the NIF began in 1997 but was not completed until 2009, after which the first large scale laser fusion experiments commenced. The project manager responsible for completing construction and bringing the world's largest laser system into full operation was Ed Moses. The director of the laser fusion energy programme which carried out the experiments was Mike Dunne, with Omar Hurricane as chief scientist.


2014


2015


