Sunday, September 9, 2018

APPLICATION OF SCHRÖDINGER WAVE EQUATION


Particle in a One Dimensional Deep Potential Well
Let us consider a particle of mass ‘m’ in a deep well, restricted to move in one dimension (say along x). Let us assume that the particle is free inside the well except during collisions with the walls, from which it rebounds elastically.
The potential function is expressed as
V(x) = 0 for 0 ≤ x ≤ L      ……..(1.60)
V(x) = ∞ for x < 0 and x > L      ……..(1.61)

Figure 1.9 : Particle in deep potential well
The probability of finding the particle outside the well is zero (i.e., ψ = 0 there). Inside the well, the one-dimensional Schrödinger wave equation is written as
d²ψ/dx² + (2mE/ℏ²)ψ = 0      ……..(1.62)
Substituting k² = 2mE/ℏ²      ……..(1.63)
we get
d²ψ/dx² + k²ψ = 0      ……..(1.64)
The general solution of the above equation may be expressed as
ψ(x) = A sin(kx + φ)      ……..(1.65)
where A and φ are constants to be determined by the boundary conditions.
Condition I: We have ψ = 0 at x = 0; therefore, from equation (1.65),
0 = A sin φ
As A ≠ 0, then sin φ = 0, or φ = 0      ……..(1.66)
Condition II: Further, ψ = 0 at x = L and φ = 0; therefore, from equation (1.65),
0 = A sin kL
As A ≠ 0, then sin kL = 0, or kL = nπ, giving
k = nπ/L ; n = 1, 2, 3, 4, ….      ……..(1.67)
Substituting the value of k from (1.67) into (1.63),
(nπ/L)² = 2mEₙ/ℏ²
This gives
Eₙ = n²π²ℏ²/(2mL²) = n²h²/(8mL²) ; n = 1, 2, 3, 4, ….      ……..(1.68)
From equation (1.68), Eₙ is the energy eigenvalue of the particle in the well. It is clear that the energy values of the particle in the well are discrete, not continuous.

Figure 1.10 : Energy for Particle
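As a quick numerical check of equation (1.68), here is a minimal Python sketch. The well width L = 1 nm and the choice of an electron are illustrative assumptions, not part of the derivation; the n² scaling makes the discrete spacing obvious.

```python
# Minimal sketch: evaluate E_n = n^2 h^2 / (8 m L^2) from equation (1.68)
# for an electron in a 1 nm wide infinite well (illustrative values).
import math

h = 6.626e-34        # Planck's constant, J s
m_e = 9.109e-31      # electron mass, kg
L = 1e-9             # well width, m (assumed for illustration)
eV = 1.602e-19       # joules per electron-volt

def energy_level(n, m=m_e, width=L):
    """Energy eigenvalue of the n-th state of a particle in a 1-D deep well."""
    return n**2 * h**2 / (8 * m * width**2)

for n in range(1, 5):
    E = energy_level(n)
    print(f"n = {n}: E = {E/eV:.3f} eV")   # levels grow as n^2, so they are discrete
```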

Using (1.66) and (1.67), equation (1.65) gives the corresponding wave functions
ψₙ(x) = A sin(nπx/L)      ……..(1.69)
The probability density is
P(x) = |ψₙ(x)|² = A² sin²(nπx/L)      ……..(1.70)
The probability density is zero at x = 0 and x = L. Since the particle is always within the well, the wave function must be normalized:
∫₀ᴸ |ψₙ(x)|² dx = A² ∫₀ᴸ sin²(nπx/L) dx = A²L/2 = 1
This gives A = √(2/L).
Substituting A in equation (1.69), we get
ψₙ(x) = √(2/L) sin(nπx/L) ; n = 1, 2, 3, 4, ….      ……..(1.71)
Equation (1.71) is the normalized wave function (eigenfunction) belonging to the energy eigenvalue Eₙ.

Figure 1.11 : Wave function for Particle
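As a sanity check on the normalization constant, the short sketch below (Python, arbitrary units, assumed n = 3 just for illustration) integrates |ψₙ|² over the well numerically and confirms the result is 1 for any choice of n and L.

```python
# Minimal sketch: numerically confirm that psi_n(x) = sqrt(2/L) sin(n pi x / L)
# from equation (1.71) is normalized, i.e. the integral of |psi|^2 over [0, L] is 1.
import math

L = 1.0      # well width in arbitrary units (the result is independent of L)
n = 3        # any quantum number works
N = 100_000  # integration steps

def psi(x):
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

dx = L / N
total = sum(psi(i * dx)**2 * dx for i in range(N))  # simple Riemann sum
print(f"integral of |psi_{n}|^2 over the well = {total:.6f}")  # ~ 1.0
```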
   


A Free Particle

A particle is said to be free when no external force acts on it during its motion in a given region of space; its potential energy V is then constant.
Let us consider an electron moving freely in space in the positive x direction, not acted on by any force, so that its potential energy is zero (V = 0). The Schrödinger wave equation reduces to
∇²ψ + (2mE/ℏ²)ψ = 0      ….(1)
Substituting k² = 2mE/ℏ², we get ∇²ψ + k²ψ = 0.
As the electron is moving in one dimension (say along the x axis), the above equation can be written as
d²ψ/dx² + k²ψ = 0      ….(2)
The general solution of equation (2) is of the form
ψ(x) = A e^(ikx) + B e^(−ikx)
The electron is not bound, and hence there are no restrictions on k. This implies that all values of energy are allowed. The allowed energy values form a continuum and are given by
E = ℏ²k²/2m      ….(3)
The wave vector k describes the wave properties of the electron. It is seen from the relation that E ∝ k². Thus the plot of E as a function of k gives a parabola.

The momentum is well defined and in this case is given by p = ℏk.
Therefore, according to the uncertainty principle, it is impossible to assign a definite position to the electron.
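A minimal sketch of the dispersion relation (3), using illustrative electron values (the specific k values are arbitrary choices): tabulating E for a few wave vectors shows the continuum of allowed energies and the parabolic growth of E with k.

```python
# Minimal sketch: the free-particle dispersion E = (hbar k)^2 / (2 m) from
# equation (3) is continuous in k; doubling k quadruples E, i.e. E(k) is a parabola.
hbar = 1.055e-34   # reduced Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # joules per electron-volt

for k in [1e9, 2e9, 3e9, 4e9]:          # wave vectors in 1/m (illustrative)
    E = (hbar * k)**2 / (2 * m_e)       # allowed energy, J (no quantization)
    p = hbar * k                        # well-defined momentum, kg m/s
    print(f"k = {k:.0e} 1/m -> E = {E/eV:.3f} eV, p = {p:.3e} kg m/s")
```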

MEDICAL PHYSICS



Medical physics (also called biomedical physics, medical biophysics or applied physics in medicine) is, in general, the application of physics concepts, theories, and methods to medicine or healthcare. Medical physics departments may be found in hospitals or universities.


In the case of hospital work, the term medical physicist is the title of a specific healthcare profession, usually working within a hospital. Medical physicists are often found in the following healthcare specialties: diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, radiation protection and radiation oncology.


University departments are of two types. The first type is mainly concerned with preparing students for a career as a hospital medical physicist, and research focuses on improving the practice of the profession. A second type (increasingly called 'biomedical physics') has a much wider scope and may include research in any applications of physics to medicine from the study of biomolecular structure to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanobiotechnology). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.

Mission statement of medical physicists

In the case of hospital medical physics departments, the mission statement for medical physicists as adopted by the European Federation of Organisations for Medical Physics (EFOMP) is the following:
"Medical Physicists will contribute to maintaining and improving the quality, safety and cost-effectiveness of healthcare services through patient-oriented activities requiring expert action, involvement or advice regarding the specification, selection, acceptance testing, commissioning, quality assurance/control and optimised clinical use of medical devices and regarding patient risks and protection from associated physical agents (e.g., x-rays, electromagnetic fields, laser light, radionuclides) including the prevention of unintended or accidental exposures; all activities will be based on current best evidence or own scientific research when the available evidence is not sufficient. The scope includes risks to volunteers in biomedical research, carers and comforters. The scope often includes risks to workers and public particularly when these impact patient risk"
The term "physical agents" refers to ionising and non-ionising electromagnetic radiations, static electric and magnetic fields, ultrasound, laser light and any other Physical Agent associated with medical e.g., x-rays in computerised tomography (CT), gamma rays/radionuclides in nuclear medicine, magnetic fields and radio-frequencies in magnetic resonance imaging (MRI), ultrasound in ultrasound imaging and Doppler measurements.
This mission includes the following 11 key activities:
  1. Scientific problem solving service: Comprehensive problem solving service involving recognition of less than optimal performance or optimised use of medical devices, identification and elimination of possible causes or misuse, and confirmation that proposed solutions have restored device performance and use to acceptable status. All activities are to be based on current best scientific evidence or own research when the available evidence is not sufficient.
  2. Dosimetry measurements: Measurement of doses suffered by patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures (e.g., for legal or employment purposes); selection, calibration and maintenance of dosimetry related instrumentation; independent checking of dose related quantities provided by dose reporting devices (including software devices); measurement of dose related quantities required as inputs to dose reporting or estimating devices (including software). Measurements to be based on current recommended techniques and protocols. Includes dosimetry of all physical agents.
  3. Patient safety/risk management (including volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures). Surveillance of medical devices and evaluation of clinical protocols to ensure the ongoing protection of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures from the deleterious effects of physical agents in accordance with the latest published evidence or own research when the available evidence is not sufficient. Includes the development of risk assessment protocols.
  4. Occupational and public safety/risk management (when there is an impact on medical exposure or own safety). Surveillance of medical devices and evaluation of clinical protocols with respect to protection of workers and public when impacting the exposure of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures or responsibility with respect to own safety. Includes the development of risk assessment protocols in conjunction with other experts involved in occupational / public risks.
  5. Clinical medical device management: Specification, selection, acceptance testing, commissioning and quality assurance/ control of medical devices in accordance with the latest published European or International recommendations and the management and supervision of associated programmes. Testing to be based on current recommended techniques and protocols.
  6. Clinical involvement: Carrying out, participating in and supervising everyday radiation protection and quality control procedures to ensure ongoing effective and optimised use of medical radiological devices and including patient specific optimization.
  7. Development of service quality and cost-effectiveness: Leading the introduction of new medical radiological devices into clinical service, the introduction of new medical physics services and participating in the introduction/development of clinical protocols/techniques whilst giving due attention to economic issues.
  8. Expert consultancy: Provision of expert advice to outside clients (e.g., clinics with no in-house medical physics expertise).
  9. Education of healthcare professionals (including medical physics trainees): Contributing to quality healthcare professional education through knowledge transfer activities concerning the technical-scientific knowledge, skills and competences supporting the clinically effective, safe, evidence-based and economical use of medical radiological devices. Participation in the education of medical physics students and organisation of medical physics residency programmes.
  10. Health technology assessment (HTA): Taking responsibility for the physics component of health technology assessments related to medical radiological devices and /or the medical uses of radioactive substances/sources.
  11. Innovation: Developing new or modifying existing devices (including software) and protocols for the solution of hitherto unresolved clinical problems.

Medical biophysics and biomedical physics

Some education institutions house departments or programs bearing the title "medical biophysics" or "biomedical physics" or "applied physics in medicine". Generally, these fall into one of two categories: interdisciplinary departments that house biophysics, radiobiology, and medical physics under a single umbrella; and undergraduate programs that prepare students for further study in medical physics, biophysics, or medicine.

Areas of specialty

The International Organization for Medical Physics (IOMP) recognizes main areas of medical physics employment and focus. These are:

Medical imaging physics

Para-sagittal MRI of the head in a patient with benign familial macrocephaly.

Medical imaging physics is also known as diagnostic and interventional radiology physics. Clinical (both "in-house" and "consulting") physicists typically deal with testing, optimization, and quality assurance of diagnostic radiology areas such as radiographic X-rays, fluoroscopy, mammography, angiography, and computed tomography, as well as non-ionizing radiation modalities such as ultrasound and MRI. They may also be engaged with radiation protection issues such as dosimetry (for staff and patients). In addition, many imaging physicists are also involved with nuclear medicine systems, including single photon emission computed tomography (SPECT) and positron emission tomography (PET). Sometimes, imaging physicists are also engaged in clinical areas for research and teaching purposes, such as quantifying intravascular ultrasound as a possible method of imaging a particular vascular object.

Radiation therapeutic physics

Radiation therapeutic physics is also known as radiotherapy physics or radiation oncology physics. The majority of medical physicists currently working in the US, Canada, and some western countries are of this group. A radiation therapy physicist typically deals with linear accelerator (Linac) systems and kilovoltage x-ray treatment units on a daily basis, as well as other modalities such as TomoTherapy, gamma knife, cyberknife, proton therapy, and brachytherapy. The academic and research side of therapeutic physics may encompass fields such as boron neutron capture therapy, sealed source radiotherapy, terahertz radiation, high-intensity focused ultrasound (including lithotripsy), optical radiation lasers, ultraviolet etc. including photodynamic therapy, as well as nuclear medicine including unsealed source radiotherapy, and photomedicine, which is the use of light to treat and diagnose disease.

Nuclear medicine physics

Nuclear medicine is a branch of medicine that uses radiation to provide information about the functioning of a person's specific organs or to treat disease. The thyroid, bones, heart, liver and many other organs can be easily imaged, and disorders in their function revealed. In some cases radiation sources can be used to treat diseased organs, or tumours. Five Nobel laureates have been intimately involved with the use of radioactive tracers in medicine. Over 10,000 hospitals worldwide use radioisotopes in medicine, and about 90% of the procedures are for diagnosis. The most common radioisotope used in diagnosis is technetium-99m, with some 30 million procedures per year, accounting for 80% of all nuclear medicine procedures worldwide.

Health physics

Health physics is also known as radiation safety or radiation protection. Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the recognition, evaluation, and control of health hazards to permit the safe use and application of ionizing radiation. Health physics professionals promote excellence in the science and practice of radiation protection and safety.
  • Background radiation
  • Radiation protection
  • Dosimetry
  • Health physics
  • Radiological protection of patients

Non-ionizing Medical Radiation Physics

Some aspects of non-ionising radiation physics may be considered under radiation protection or diagnostic imaging physics. Imaging modalities include MRI, optical imaging and ultrasound. Safety considerations in these areas also extend to lasers:
  • Lasers and applications in medicine

Physiological measurement

Physiological measurement techniques are used to monitor and measure various physiological parameters. Many of these techniques are non-invasive and can be used in conjunction with, or as an alternative to, invasive methods. Measurement methods include electrocardiography (ECG). Many of these areas may be covered by other specialities, for example medical engineering or vascular science.

Healthcare informatics and computational physics

Other fields closely related to medical physics deal with medical data, information technology and computer science for medicine.

  • Information and communication in medicine
  • Medical informatics
  • Image processing, display and visualization
  • Computer-aided diagnosis
  • Picture archiving and communication systems (PACS)
  • Standards: DICOM, ISO, IHE
  • Hospital information systems
  • e-Health
  • Telemedicine
  • Digital operating room
  • Workflow, patient-specific modeling
  • Medicine on the Internet of Things
  • Distant monitoring and telehomecare




Thursday, September 6, 2018

WHAT MAKES QUANTUM MECHANICS SO WEIRD


Quantum physics, the laws that govern the behavior of the smallest components of our universe, such as fundamental particles, atoms and molecules, is admittedly a tough subject: a complicated path of intricate mathematics and scientific theory. Those outside the field who brave the journey often find themselves in a confusing place where the classical principles they learned in school no longer apply and the new rules seem…well…a bit unbelievable. In the quantum world, things can be in two places at once? Better yet, they can be two things at once? What???

   If this has been your experience, don’t worry—you’re in very good company. Respected scientists, including Albert Einstein, felt the same way, and made many attempts to prove that these strange new theories couldn’t be correct. Each attempt, however, failed, and instead reinforced the reality of quantum physics in contrast to our conventional intuition. But this is good news: the properties buried in quantum theory hold great promise for exciting, real-world applications.

So how do we make sense of these bizarre new rules? What really makes quantum physics so different, so strange, and so promising? To start, let’s take a look back to 1900 and the work of physicist Max Planck, who first drew back the curtain on the mysterious quantum world.
That year, Planck was embroiled in a nagging physics problem: how to explain the radiation of light emanating from hot objects. At the time, there were two conflicting laws, neither of which was quite right. Sandwiching visible light on the electromagnetic spectrum are infrared waves, which have longer wavelengths and a lower frequency, and ultraviolet waves, which have shorter wavelengths and a higher frequency. One law, Wien’s law, could accurately predict the experimental results for ultraviolet waves, but fell apart when it came to infrared waves. Conversely, the Rayleigh-Jeans law covered infrared waves, but didn’t work for ultraviolet. What Planck needed, then, was one law that would correctly apply to both ends of the spectrum.
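To see the mismatch concretely, here is a minimal Python sketch (the 5000 K temperature and sample frequencies are illustrative assumptions) comparing the two limiting formulas against Planck's full law for the spectral radiance B(ν, T): Rayleigh-Jeans tracks Planck at low (infrared) frequencies, Wien tracks it at high (ultraviolet) frequencies, and Planck's law covers both ends.

```python
# Minimal sketch: Rayleigh-Jeans vs. Wien vs. Planck's law for B(nu, T).
import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K
T = 5000.0      # temperature, K (illustrative)

def planck(nu):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu):
    return 2 * nu**2 * kB * T / c**2

def wien(nu):
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (kB * T))

for nu in [1e12, 1e13, 1e14, 1e15]:     # from far infrared to ultraviolet
    print(f"nu = {nu:.0e} Hz: Planck {planck(nu):.3e}, "
          f"RJ {rayleigh_jeans(nu):.3e}, Wien {wien(nu):.3e}")
```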

   For the birth of quantum physics, the details of Planck’s solution to this problem were far less important than the trick he used to arrive at it. This trick, which Planck later on called “happy guesswork,” was simple but unsettling: the radiation energy had to be chopped up into tiny packages, or particles of light. Based on everything physicists knew at the time, this claim was outrageous: light was understood as a wave, which left little space for particles of light, nowadays known as photons. So now light could be…both? While it was not his intent, Planck’s trick was the first step in a chain reaction that turned the physics world upside-down.

    We now understand that it’s not just light, but all of the fundamental components of our universe that embrace this dual nature and the other properties of the quantum world. To explain, let’s take another step back, this time to our early science education, and picture electrons—the negatively charged fundamental particles that, together with the positively charged protons and neutral neutrons, make up atoms. Are you picturing them as miniature billiard balls? What about a light wave? Do you imagine it as a tiny version of what comes crashing against the shoreline?

   These are convenient pictures, because they are easy to imagine. But what is your evidence that these mental pictures really describe the nature of an electron, and the nature of light? With your sensory perception, you cannot see a single electron, nor observe a light wave oscillate. And, as it turns out, neither light, nor electrons, nor atoms, nor even molecules are simply waves, or just particles.

   When it comes to strange quantum properties, this dual wave-particle nature is just the tip of the iceberg. One of the most striking concepts is that of quantum entanglement. It can be illustrated like this: imagine being the proud parent of two children, Susy and Sam, who have just hit the age of disagreeing with each other all the time. They both like mac & cheese as well as pizza. Sadly, this is no longer sufficient to guarantee a drama-free dinner. As a counter strategy, you and your partner team up and question Sam and Susy simultaneously in different rooms. This way, they cannot coordinate their dissent, and you have a 50 percent chance of random agreement on the dinner choice.

Believe it or not, in the quantum world you would be doomed. In an experiment, the two parties could be photons, and the dinner question could be a measurement of their polarization. Polarization corresponds to the direction of oscillation—moving up and down or from side to side—when light behaves as a wave. Even if you separate the two parties, eliminating all communication, quantum physics allows for an invisible link between them known as entanglement. Quantum-Susy might change her answer from day to day (even pizza gets boring after a while), but every single time there is perfect anti-correlation with quantum-Sam’s answer: if one wants pizza, the other opts for mac & cheese—all the time!
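As a toy illustration only (not a simulation of real photon optics), the sketch below samples same-basis measurements on a singlet-like entangled pair: each answer on its own is a fair coin flip, yet the two answers never agree, exactly the quantum-Susy and quantum-Sam behavior described above.

```python
# Toy sketch: perfect anti-correlation for same-basis measurements
# on a singlet-like entangled pair.
import random

def measure_entangled_pair():
    """Return (susy, sam) outcomes, +1 or -1, for a same-basis measurement."""
    susy = random.choice([+1, -1])   # individually, a fair coin flip...
    sam = -susy                      # ...but entanglement forces the opposite answer
    return susy, sam

trials = 10_000
agreements = 0
for _ in range(trials):
    susy, sam = measure_entangled_pair()
    agreements += (susy == sam)

print(f"agreement rate over {trials} trials: {agreements / trials:.3f}")
# Independent coin flips would agree ~50% of the time; here the rate is 0.
```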
This is just one example of the many bizarre properties we know to be true based on careful calculation and experimentation. But if we’re so sure, why do we witness so little of the quantum world?

   Much of quantum physics happens at length scales so small that they remain hidden to us, even when using the most powerful microscopes. In addition, witnessing quantum physics at work turns out to be radically different from what you might call an “observation.” Seeing that an object is the color red is a fairly straightforward, unobtrusive process. Probing a quantum object like an electron or photon is an entirely different matter. True quantum behavior tends to be fragile, and attempting to measure it often constitutes a major but unavoidable disruption that usually prevents quantum weirdness from becoming directly visible.

   However, just because we cannot see quantum physics in action doesn’t mean that it hasn’t affected our lives in a tangible, positive way. The impact of quantum physics has been enormous: not only is it the prime common factor in nearly all physics Nobel Prizes awarded in the past one hundred years, but it has also been a crucial driving force in technological advances ranging from lasers and superconductors to medical imaging like MRI. Indeed, imagining a world in which quantum physics had never been discovered would amount to eliminating a lot of the technology we take for granted each and every day.

    The grandest vision, perhaps, is that of harnessing the power of quantum physics for a completely new kind of supercomputer. Such a quantum computer could solve tasks in a heartbeat that would currently require centuries of computation time on the fastest computers available today. Sounds intriguing? Many physicists around the world working on the hardware of such a machine would agree.

   They would also explain, however, how daunting the challenges are in this endeavor. Overcoming the fragile nature of quantum behavior is not an easy task—one that rivals the quantum leap of faith taken by Planck and his colleagues to bring us into this new and exciting world.

THE STARS



A star is a type of astronomical object consisting of a luminous spheroid of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous points in the sky due to their immense distance from Earth. Historically, the most prominent stars were grouped into constellations and asterisms, the brightest of which gained proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. However, most of the stars in the Universe, including all stars outside our galaxy, the Milky Way, are invisible to the naked eye from Earth. Indeed, most are invisible from Earth even through the most powerful telescopes.


For at least a portion of its life, a star shines due to thermonuclear fusion of hydrogen into helium in its core, releasing energy that traverses the star's interior and then radiates into outer space. Almost all naturally occurring elements heavier than helium are created by stellar nucleosynthesis during the star's lifetime, and for some stars by supernova nucleosynthesis when it explodes. Near the end of its life, a star can also contain degenerate matter. Astronomers can determine the mass, age, metallicity (chemical composition), and many other properties of a star by observing its motion through space, its luminosity, and its spectrum. The total mass of a star is the main factor that determines its evolution and eventual fate. Other characteristics of a star, including diameter and temperature, change over its life, while the star's environment affects its rotation and movement. A plot of the temperature of many stars against their luminosities produces a diagram known as a Hertzsprung–Russell diagram (H–R diagram). Plotting a particular star on that diagram allows the age and evolutionary state of that star to be determined.
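A minimal sketch of such an H–R diagram, using approximate literature values for a few well-known stars (temperatures in kelvin, luminosities in solar units; all values are rough and for illustration only, and matplotlib is assumed to be available):

```python
# Minimal sketch: a toy Hertzsprung-Russell diagram. By convention,
# effective temperature increases to the LEFT and luminosity is on a log scale.
import matplotlib.pyplot as plt

stars = {                      # approximate values, for illustration only
    "Sun":            (5800,  1.0),
    "Sirius A":       (9900,  25.0),
    "Rigel":          (12000, 1.2e5),
    "Betelgeuse":     (3500,  1.0e5),
    "Proxima Cen":    (3000,  1.7e-3),
    "Barnard's Star": (3100,  3.5e-3),
}

fig, ax = plt.subplots()
for name, (T, L) in stars.items():
    ax.scatter(T, L)
    ax.annotate(name, (T, L))
ax.set_yscale("log")
ax.invert_xaxis()                       # hotter stars plotted on the left
ax.set_xlabel("Effective temperature (K)")
ax.set_ylabel("Luminosity (solar units)")
ax.set_title("Toy H-R diagram")
plt.show()
```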

A star's life begins with the gravitational collapse of a gaseous nebula of material composed primarily of hydrogen, along with helium and trace amounts of heavier elements. When the stellar core is sufficiently dense, hydrogen becomes steadily converted into helium through nuclear fusion, releasing energy in the process. The remainder of the star's interior carries energy away from the core through a combination of radiative and convective heat transfer processes. The star's internal pressure prevents it from collapsing further under its own gravity. A star with mass greater than 0.4 times the Sun's will expand to become a red giant when the hydrogen fuel in its core is exhausted. In some cases, it will fuse heavier elements at the core or in shells around the core. As the star expands it throws a part of its mass, enriched with those heavier elements, into the interstellar environment, to be recycled later as new stars. Meanwhile, the core becomes a stellar remnant: a white dwarf, a neutron star, or, if it is sufficiently massive, a black hole.





Binary and multi-star systems consist of two or more stars that are gravitationally bound and generally move around each other in stable orbits. When two such stars have a relatively close orbit, their gravitational interaction can have a significant impact on their evolution. Stars can form part of a much larger gravitationally bound structure, such as a star cluster or a galaxy.
Historically, stars have been important to civilizations throughout the world. They have been part of religious practices and used for celestial navigation and orientation. Many ancient astronomers believed that stars were permanently affixed to a heavenly sphere and that they were immutable. By convention, astronomers grouped stars into constellations and used them to track the motions of the planets and the inferred position of the Sun. The motion of the Sun against the background stars (and the horizon) was used to create calendars, which could be used to regulate agricultural practices. The Gregorian calendar, currently used nearly everywhere in the world, is a solar calendar based on the angle of the Earth's rotational axis relative to its local star, the Sun.

The oldest accurately dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period (ca. 1531–1155 BC).

The first star catalogue in Greek astronomy was created by Aristillus in approximately 300 BC, with the help of Timocharis. The star catalog of Hipparchus (2nd century BC) included 1020 stars, and was used to assemble Ptolemy's star catalogue.  Hipparchus is known for the discovery of the first recorded nova (new star). Many of the constellations and star names in use today derive from Greek astronomy.

In spite of the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear. In 185 AD, they were the first to observe and write about a supernova, now known as the SN 185. The brightest stellar event in recorded history was the SN 1006 supernova, which was observed in 1006 and written about by the Egyptian astronomer Ali ibn Ridwan and several Chinese astronomers. The SN 1054 supernova, which gave birth to the Crab Nebula, was also observed by Chinese and Islamic astronomers.

Medieval Islamic astronomers gave Arabic names to many stars that are still used today and they invented numerous astronomical instruments that could compute the positions of the stars. They built the first large observatory research institutes, mainly for the purpose of producing Zij star catalogues. Among these, the Book of Fixed Stars (964) was written by the Persian astronomer Abd al-Rahman al-Sufi, who observed a number of stars, star clusters (including the Omicron Velorum and Brocchi's Clusters) and galaxies (including the Andromeda Galaxy). According to A. Zahoor, in the 11th century, the Persian polymath scholar Abu Rayhan Biruni described the Milky Way galaxy as a multitude of fragments having the properties of nebulous stars, and also gave the latitudes of various stars during a lunar eclipse in 1019.

According to Josep Puig, the Andalusian astronomer Ibn Bajjah proposed that the Milky Way was made up of many stars that almost touched one another and appeared to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars in 500 AH (1106/1107 AD) as evidence. Early European astronomers such as Tycho Brahe identified new stars in the night sky (later termed novae), suggesting that the heavens were not immutable. In 1584, Giordano Bruno suggested that the stars were like the Sun, and may have other planets, possibly even Earth-like, in orbit around them, an idea that had been suggested earlier by the ancient Greek philosophers, Democritus and Epicurus, and by medieval Islamic cosmologists such as Fakhr al-Din al-Razi. By the following century, the idea of the stars being the same as the Sun was reaching a consensus among astronomers. To explain why these stars exerted no net gravitational pull on the Solar System, Isaac Newton suggested that the stars were equally distributed in every direction, an idea prompted by the theologian Richard Bentley.

The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus.

William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he established a series of gauges in 600 directions and counted the stars observed along each line of sight. From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is also noted for his discovery that some stars do not merely lie along the same line of sight, but are also physical companions that form binary star systems.

The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in stellar spectra caused by the atmosphere's absorption of specific frequencies. In 1865, Secchi began classifying stars into spectral types. However, the modern version of the stellar classification scheme was developed by Annie J. Cannon during the 1900s.


The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens.

Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar in a 104-day period. Detailed observations of many binary star systems were collected by astronomers such as Friedrich Georg Wilhelm von Struve and S. W. Burnham, allowing the masses of stars to be determined from computation of orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827.

The twentieth century saw increasingly rapid advances in the scientific study of stars. The photograph became a valuable astronomical tool. Karl Schwarzschild discovered that the color of a star and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope at Mount Wilson Observatory.
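The parallax technique reduces to simple arithmetic: a parallax of p arcseconds corresponds to a distance of 1/p parsecs, and one parsec is about 3.2616 light-years. A minimal sketch (the parallax value used for 61 Cygni is approximate and assumed for illustration):

```python
# Minimal sketch: distance from trigonometric parallax, d [pc] = 1 / p [arcsec].
def parallax_to_lightyears(p_arcsec):
    PC_TO_LY = 3.2616                 # light-years per parsec
    return (1.0 / p_arcsec) * PC_TO_LY

# 61 Cygni has a parallax of roughly 0.286 arcsec (approximate value):
print(f"61 Cygni: ~{parallax_to_lightyears(0.286):.1f} light-years")  # ~11.4 ly
```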

Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung-Russell diagram was developed, propelling the astrophysical study of stars. Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin first proposed that stars were made primarily of hydrogen and helium in her 1925 PhD thesis. The spectra of stars were further understood through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined.

With the exception of supernovae, individual stars have primarily been observed in the Local Group, and especially in the visible part of the Milky Way (as demonstrated by the detailed star catalogues available for our galaxy). But some stars have been observed in the M100 galaxy of the Virgo Cluster, about 100 million light years from the Earth. In the Local Supercluster it is possible to see star clusters, and current telescopes could in principle observe faint individual stars in the Local Group (see Cepheids). However, outside the Local Supercluster of galaxies, neither individual stars nor clusters of stars have been observed. The only exception is a faint image of a large star cluster containing hundreds of thousands of stars located at a distance of one billion light years—ten times further than the most distant star cluster previously observed.

In February 2018, astronomers reported, for the first time, a signal of the reionization epoch, an indirect detection of light from the earliest stars, formed about 180 million years after the Big Bang.

In April 2018, astronomers reported the detection of the most distant "ordinary" (i.e., main sequence) star, named Icarus (formally, MACS J1149 Lensed Star 1), at 9 billion light-years away from Earth.

In May 2018, astronomers reported the detection of the most distant oxygen ever detected in the Universe, and the most distant galaxy ever observed by the Atacama Large Millimeter Array or the Very Large Telescope, with the team inferring that the signal was emitted 13.3 billion years ago (or 500 million years after the Big Bang). They found that the observed brightness of the galaxy is well explained by a model in which the onset of star formation corresponds to only 250 million years after the Universe began, corresponding to a redshift of about 15.

Designations

The concept of a constellation was known to exist during the Babylonian period. Ancient sky watchers imagined that prominent arrangements of stars formed patterns, and they associated these with particular aspects of nature or their myths. Twelve of these formations lay along the band of the ecliptic and these became the basis of astrology. Many of the more prominent individual stars were also given names, particularly with Arabic or Latin designations.

As well as certain constellations and the Sun itself, individual stars have their own myths. To the Ancient Greeks, some "stars", known as planets (Greek πλανήτης (planētēs), meaning "wanderer"), represented various important deities, from which the names of the planets Mercury, Venus, Mars, Jupiter and Saturn were taken. (Uranus and Neptune were also Greek and Roman gods, but neither planet was known in Antiquity because of their low brightness. Their names were assigned by later astronomers.)

Circa 1600, the names of the constellations were used to name the stars in the corresponding regions of the sky. The German astronomer Johann Bayer created a series of star maps and applied Greek letters as designations to the stars in each constellation. Later, a numbering system based on the star's right ascension was invented and added to John Flamsteed's star catalogue in his book "Historia coelestis Britannica" (the 1712 edition); this numbering system came to be called the Flamsteed designation or Flamsteed numbering.

The only internationally recognized authority for naming celestial bodies is the International Astronomical Union (IAU). The International Astronomical Union maintains the Working Group on Star Names (WGSN), which catalogs and standardizes proper names for stars. A number of private companies sell names of stars, which the British Library calls an unregulated commercial enterprise. The IAU has disassociated itself from this commercial practice, and these names are neither recognized by the IAU, professional astronomers, nor the amateur astronomy community. One such star-naming company is the International Star Registry, which, during the 1980s, was accused of deceptive practice for making it appear that the assigned name was official. This now-discontinued ISR practice was informally labeled a scam and a fraud, and the New York City Department of Consumer Affairs issued a violation against ISR for engaging in a deceptive trade practice.

Units of measurement

Although stellar parameters can be expressed in SI units or CGS units, it is often most convenient to express mass, luminosity, and radii in solar units, based on the characteristics of the Sun. In 2015, the IAU defined a set of nominal solar values (defined as SI constants, without uncertainties) which can be used for quoting stellar parameters:
nominal solar luminosity: L☉ = 3.828 × 10²⁶ W
nominal solar radius: R☉ = 6.957 × 10⁸ m
The solar mass M☉ was not explicitly defined by the IAU due to the large relative uncertainty (10⁻⁴) of the Newtonian gravitational constant G. However, since the product of the Newtonian gravitational constant and solar mass together (GM☉) has been determined to much greater precision, the IAU defined the nominal solar mass parameter to be:
nominal solar mass parameter: GM☉ = 1.3271244 × 10²⁰ m³ s⁻²
However, one can combine the nominal solar mass parameter with the most recent (2014) CODATA estimate of the Newtonian gravitational constant G to derive the solar mass to be approximately 1.9885 × 10³⁰ kg. Although the exact values for the luminosity, radius, mass parameter, and mass may vary slightly in the future due to observational uncertainties, the 2015 IAU nominal constants will remain the same SI values as they remain useful measures for quoting stellar parameters.
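The arithmetic in the preceding paragraph is easy to reproduce; a minimal sketch dividing the IAU nominal solar mass parameter by the CODATA 2014 value of G:

```python
# Minimal sketch: recover the solar mass from the IAU nominal solar mass
# parameter GM and the CODATA 2014 value of G, as described above.
GM_sun = 1.3271244e20   # nominal solar mass parameter, m^3 s^-2 (IAU 2015)
G = 6.67408e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2 (CODATA 2014)

M_sun = GM_sun / G
print(f"M_sun ~ {M_sun:.4e} kg")   # ~ 1.9885e30 kg
```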

Large lengths, such as the radius of a giant star or the semi-major axis of a binary star system, are often expressed in terms of the astronomical unit — approximately equal to the mean distance between the Earth and the Sun (150 million km or approximately 93 million miles). In 2012, the IAU defined the astronomical unit to be an exact length in meters: 149,597,870,700 m.